Every SEO blog has a Core Web Vitals article. None of them pass Core Web Vitals themselves. I ran Lumina's PageSpeed tool against 10 widely-cited CWV and page-speed articles from the top of Google EN and DE. In the field data (what Google actually uses for ranking), zero of the five English-language articles show a "Good" CWV status. Ahrefs, the top SEO vendor result, scores in the high 70s on lab PSI but ships a real-user INP of 390 milliseconds, nearly twice the 200 ms threshold. Moz ranks page one for "page speed" with a mobile PSI around 17, yet its field INP lands at 477 ms. Backlinko's guide is 373 days stale and misses Good on INP at 248 ms, just over the 200 ms threshold.

Core Web Vitals are three metrics Google uses to score how a page feels to a real user. Largest Contentful Paint (LCP) measures when the main content appears. Interaction to Next Paint (INP) measures how fast the page reacts to clicks and taps. Cumulative Layout Shift (CLS) measures how stable the layout stays while the page loads. The scores come from Chrome's field data, the 28-day rolling averages of real users hitting your site, not from a lab simulation on a single machine.

This guide is the complete evergreen reference for Core Web Vitals in 2026. It covers the three metrics with concrete 2026 thresholds, why your PSI score and your Search Console report rarely agree, the five tools that actually measure CWV correctly, the one-line answer on whether CWV moves rankings, and a named-tactic breakdown for fixing each of LCP, INP, and CLS. The data powering the Live Audit below is first-party: Lumina's PageSpeed tool ran a PSI probe and a /deep schema fetch against all 10 competitors on 2026-04-22.

What Core Web Vitals Actually Measure

Three metrics, one job each. Together they answer the question "does this page feel fast for a real user on a real device?" better than any single number ever did.

Largest Contentful Paint (LCP): how fast the main content appears

LCP is the time from when the page starts loading to when the largest visible element in the viewport finishes rendering. That element is usually the hero image, the main heading, or a large block of body text. It's the frame where the user first feels "OK, the page is here." The clock starts at navigation. The clock stops at that one paint event. Everything after is out of scope.

Good LCP: 2.5 seconds or less at the 75th percentile. Poor: more than 4 seconds. Most sites that fail LCP are failing because of one of four things: an unoptimized hero image (too large, wrong format, no fetchpriority="high"), render-blocking third-party scripts, slow server response time (TTFB), or a web font that blocks text rendering. In Lumina's CWV audit, Moz's page hits 22–27 seconds LCP in lab (run-to-run variance, both catastrophic) because of a render-blocking cookie banner and an oversized hero. Dynatrace hits 28.7 seconds because their docs framework lazy-loads every image, including the above-fold one.

Interaction to Next Paint (INP): how fast the page reacts

INP measures the latency of every user interaction on the page: clicks, taps, key presses. It captures the full round-trip from the input event to the next paint that visibly acknowledges it, including event handler time, DOM updates, style and layout recalculation, and paint. Google reports roughly the worst interaction on the page (the 98th percentile, ignoring one outlier per 50 interactions), so a consistently slow handler ruins the score while a single fluke on a busy page does not.

Good INP: 200 milliseconds or less. Poor: more than 500 ms. INP replaced First Input Delay (FID) on March 12, 2024. FID measured only the first interaction's input delay and was easy to game; INP captures full interaction latency across every interaction type. The sites that fail INP are running heavy JavaScript on the main thread, things like analytics scripts that block for 300 ms on click, React re-renders that iterate thousands of virtual DOM nodes, event handlers wrapped in a setTimeout(0) queue that does not drain fast enough. Ahrefs scores 390 ms field INP in Lumina's audit. Moz scores 477 ms.
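The selection rule can be made concrete with a small sketch. This is not Chrome's implementation, just an illustration of the published definition: INP is the worst interaction latency on the page, skipping one outlier for every 50 interactions recorded.

```javascript
// Illustration of the INP selection rule (not Chrome's code):
// take the worst interaction latency, ignoring one outlier
// per 50 interactions recorded on the page.
function approximateINP(latenciesMs) {
  if (latenciesMs.length === 0) return null;
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const outliersToSkip = Math.min(
    Math.floor(sorted.length / 50),
    sorted.length - 1
  );
  return sorted[sorted.length - 1 - outliersToSkip];
}
```

A page with three interactions at 80, 120, and 950 ms reports 950 ms; on a page with 50+ interactions, a single 950 ms fluke gets skipped and the next-worst latency counts instead.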

Cumulative Layout Shift (CLS): how stable the layout stays

CLS tracks unexpected layout shifts, which happen when elements on the page jump position as new content loads. Every shift contributes a score based on how much of the viewport moved and how far. Google groups shifts into session windows (capped at 5 seconds, closed by a 1-second gap between shifts) and takes the largest window's sum as your CLS. Shifts within 500 ms of a click or tap don't count (so user-triggered animation is fine).

Good CLS: 0.1 or less. Poor: more than 0.25. The usual causes: images without explicit width and height attributes (the browser reserves no space until the image loads, then pushes content down), ads or embeds that inject dynamically, web fonts that swap mid-paint and resize the text block, and hero sections with a late-loading video that pushes the article below it. Google Developer Docs itself fails CLS in the Lumina audit at 0.16 field. A content-freshness banner shifts the layout after initial paint.
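The session-window math above can be sketched in a few lines. A simplified model, not the browser's implementation, with hypothetical shift records:

```javascript
// Simplified CLS aggregation: group shifts into session windows
// (closed by a >1 s gap between shifts or a 5 s window cap) and
// report the largest window's sum.
function clsFromShifts(shifts) {
  // shifts: [{ time, value }] sorted by time, timestamps in ms
  let cls = 0;
  let windowSum = 0;
  let windowStart = -Infinity;
  let lastShift = -Infinity;
  for (const { time, value } of shifts) {
    if (time - lastShift > 1000 || time - windowStart > 5000) {
      windowSum = 0; // start a new session window
      windowStart = time;
    }
    windowSum += value;
    lastShift = time;
    cls = Math.max(cls, windowSum);
  }
  return cls;
}
```

Two small shifts during load (0.05 + 0.05) and one late 0.2 shift seconds later land in separate windows, so the page's CLS is 0.2, not 0.3.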

Live Audit · 2026-04-22

10 widely-cited CWV articles: what they miss.

Ran Lumina's PageSpeed tool and Schema Validator (JS-rendered fetch) against 10 reference articles on Core Web Vitals and page speed, drawn from the top organic results on google.com and google.de: Google Docs, Dynatrace, Ahrefs, Backlinko, Moz (EN) plus Ryte, Seobility, Sistrix, SEO-Küche, Semrush DE. First-party field data via Chrome UX Report.

0/5
EN pages pass CWV (field)
Google Docs fails CLS (0.16). Ahrefs fails INP (390 ms) and CLS (0.12). Moz misses Good on LCP (3.3 s) and INP (477 ms) with a 17/100 lab PSI score; CLS itself passes. Dynatrace and Backlinko miss Good on INP (259 ms, 248 ms). All 5 English articles about CWV fail CWV themselves.
5/10
still cite FID over INP
Ahrefs mentions FID 48 times vs INP 29 (counted in the article body only). Moz: 4 FID, 0 INP. Ryte: 18 FID, 0 INP. SEO-Küche: 19 FID, 0 INP. Semrush DE: 11 FID, 0 INP. Google promoted INP on 2024-03-12. Five of the ten never caught up.
0/10
ship FAQPage schema
Not one of Google Docs, Dynatrace, Ahrefs, Backlinko, Moz, Ryte, Seobility, Sistrix, SEO-Küche, Semrush DE ships FAQPage. The cheapest AI-citation schema on an evergreen topic. Open first-mover win.
2/10
ship no usable schema
Ahrefs (Google's #9 ranker for core web vitals) ships one JSON-LD block with no recognizable @type. Ryte Wiki (#2 DACH ranker) ships zero JSON-LD blocks. AI retrievers can't build an entity graph from either page.
5/10
publish no dateModified
Half the field signals freshness, half doesn't. Days since last modified (schema or article:modified_time): Sistrix 83, Seobility 271, Backlinko 373, Moz 379, Ahrefs 455. Silent on freshness: Google Docs, Dynatrace, Ryte, SEO-Küche, Semrush DE. Average staleness of those that do: 312 days.
~80 → 390
Ahrefs lab vs field
Ahrefs: lab PSI in the high 70s to low 80s (variance run to run, "looks great"). Field INP 390 ms, stable across runs, nearly 2× the 200 ms threshold. Lab throttles CPU but does not simulate real interaction patterns. The lab-vs-field gap is the story of CWV.

Run the same audit on any URL →

The 2026 Thresholds

Google defines three buckets per metric: Good, Needs Improvement, Poor. You qualify for "Good" on a metric only when the 75th percentile of your real users hits that bucket. That means up to 25% of your traffic can have a worse experience, some of it Poor, and you still "pass". Google scores the typical-to-slow experience, not the best case.

Metric                            Good (≤ at p75)    Needs Improvement    Poor (>)
LCP (Largest Contentful Paint)    2.5 s              2.5 s – 4.0 s        4.0 s
INP (Interaction to Next Paint)   200 ms             200 ms – 500 ms      500 ms
CLS (Cumulative Layout Shift)     0.1                0.1 – 0.25           0.25

A page is "Good on Core Web Vitals" only when all three metrics are Good at the 75th percentile. One Poor metric drops you out. Most sites that fail CWV fail exactly one metric, usually INP in 2026, because JavaScript has kept growing but server-rendered HTML has gotten faster. Backlinko is a clean example: LCP field 1.9 s (Good), CLS 0.00 (Good), INP 248 ms (Needs Improvement). Close, but no pass.
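The pass/fail check against these thresholds is mechanical. A sketch with illustrative sample data, using a nearest-rank 75th percentile:

```javascript
// Thresholds from the table above: the "Good" and "Poor" cutoffs.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // ms
  INP: { good: 200, poor: 500 },   // ms
  CLS: { good: 0.1, poor: 0.25 },  // unitless
};

// Nearest-rank 75th percentile of the field samples.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

function rate(metric, samples) {
  const v = p75(samples);
  const t = THRESHOLDS[metric];
  if (v <= t.good) return "good";
  if (v <= t.poor) return "needs-improvement";
  return "poor";
}

// A page is Good on CWV only when all three metrics rate "good".
function passesCWV(samplesByMetric) {
  return Object.entries(samplesByMetric).every(
    ([metric, samples]) => rate(metric, samples) === "good"
  );
}
```

Feed it Backlinko's numbers and you get the same verdict as the audit: LCP and CLS pass, a 248 ms p75 INP rates "needs-improvement", and the page as a whole does not pass.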

Field Data vs Lab Data

The single biggest source of CWV confusion: two tools can give you wildly different answers for the same page. Both are right. They're measuring different things.

Field data is the 28-day rolling aggregate of real Chrome users hitting your page. Google collects it via the Chrome User Experience Report (CrUX), an anonymized dataset gathered from Chrome users who opted in. Field data is what Google actually uses for ranking. It's surfaced in PageSpeed Insights at the top of the report, in Search Console's Core Web Vitals report, and in the raw CrUX BigQuery dataset for site-wide analysis. Field data is the truth about how your users actually experience the page.

Lab data is a single simulated run on a throttled mobile profile. Lighthouse inside Chrome DevTools or the PageSpeed Insights lab run both do this. The profile: 1.6 Mbps down, 750 Kbps up, 150 ms round-trip time, 4× CPU slowdown. It's an approximation of a mid-range mobile device on a 4G connection. Lab data is reproducible: same inputs, same result. It's how you catch regressions before you ship, how you debug a specific metric, how you profile a change. But no single real user is on this exact profile.

When field and lab disagree, the gap tells you something. Ahrefs in Lumina's audit has a field LCP of 2.1 s (Good) but a lab LCP in the 4–5 s range. That's Chrome's production users finishing the largest paint faster than the throttled lab simulation, probably because Ahrefs ships aggressive preconnects and CDN caching that real browsers benefit from, while Lighthouse runs cold every time. The opposite pattern, where lab passes but field fails, usually means real users have slow devices or ad-blockers that break your JavaScript.

Trust field data for ranking decisions. Trust lab data for debugging.

A page that passes Lighthouse but fails CrUX will not rank on page experience. Google uses field. A page that fails Lighthouse but passes CrUX is usually fine; real users just aren't hitting the lab bottleneck. But lab is still where you catch regressions before they reach users. Use both. Do not substitute one for the other.

How to Measure Core Web Vitals

Five tools. Each has a job. The right one depends on what you're trying to answer.

PageSpeed Insights (field + lab)

The canonical starting point. Paste a URL, get both field (CrUX) and lab (Lighthouse) scores on desktop and mobile. Free, no login, identical API to every other PSI wrapper. Lumina's PageSpeed tool adds AI-generated fix snippets from the audit data.

Lumina PageSpeed →
Search Console (site-wide field)

The Core Web Vitals report shows pass/fail status for every page on your site, grouped by URL patterns. Uses field data only, 28-day rolling window. Best for spotting which URL clusters regressed after a deploy.

GSC Dashboard →
Lighthouse in DevTools (lab debug)

Chrome DevTools → Lighthouse tab → generate report. Single-URL lab run, same engine as PSI. Use it when you need the trace tab, the LCP element breakdown, or to reproduce a specific regression offline.

Lighthouse docs →
web-vitals.js (real user monitoring)

Google's 3 KB JavaScript library. Drop it in any page, pipe the callbacks to your analytics endpoint. You get per-user CWV data for your own site, without waiting for CrUX to include you. The only way to measure CWV on low-traffic URLs.

web-vitals on GitHub →
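A minimal wiring sketch for the library, assuming the web-vitals v4 callback API (onLCP/onINP/onCLS are the real exports) and a hypothetical /analytics/cwv endpoint:

```javascript
// In a real page: import { onLCP, onINP, onCLS } from 'web-vitals';

// Keep only the fields the backend needs from each metric callback.
function toBeacon(metric) {
  return JSON.stringify({
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, for deduping
  });
}

function sendToAnalytics(metric) {
  const body = toBeacon(metric);
  // sendBeacon survives page unload; keepalive fetch is the fallback.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/analytics/cwv", body);
  } else {
    fetch("/analytics/cwv", { method: "POST", body, keepalive: true });
  }
}

// Register once per page load:
// onLCP(sendToAnalytics);
// onINP(sendToAnalytics);
// onCLS(sendToAnalytics);
```

The sendBeacon path matters: LCP and CLS callbacks often fire as the user leaves the page, when a normal XHR would be cancelled.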
CrUX BigQuery (historical field)

Google publishes the full CrUX dataset to BigQuery every month. Query any origin's CWV going back years, aggregate across competitors, benchmark your vertical. Free query tier covers most single-site use; commercial if you're benchmarking an industry.

CrUX BigQuery →

What you do not need: third-party RUM vendors for CWV alone. web-vitals.js plus your existing analytics covers 95% of what paid tools offer. Buy a commercial RUM product only if you need user-session replay or cross-service tracing on top of CWV — not for CWV itself.

Do Core Web Vitals Affect Google Rankings?

Marginally. Google calls page experience a "tiebreaker" between pages of similar quality. It is a ranking signal, but a small one. Content quality and backlinks dominate the SERP far more than CWV does in 2026, same as they did in 2021 when Google rolled CWV out as a signal.

The case against CWV as a strong ranking factor is on the SERP itself. Moz ranks page one for page speed with a mobile PSI score around 17 out of 100, a page that misses Good on LCP and INP in Chrome field data. Ahrefs ranks #9 for core web vitals despite failing INP by a factor of two. These are not edge cases. If CWV were a strong ranking factor, pages that catastrophically fail CWV would not sit in the top 10 for CWV-related keywords. They do. Often.

The case for caring about CWV anyway is not rankings; it's the users you already have. A page with a 5-second LCP loses real users before they see the content. A page with 500 ms INP feels broken on every click. A page with CLS 0.3 ejects users mid-read when the layout jumps. These are conversion problems and retention problems. The ranking bump is a rounding error next to the UX loss.

One caveat on the "marginal ranking" point: Google weighs CWV into the page experience signal more heavily for competitive SERPs with similar content quality. In a tight race between two pages at the same topical depth and authority, the one that passes CWV wins the tiebreaker. If you're already ranking #1, CWV is cosmetic. If you're fighting for #4 against three equivalent pages, it's where you gain the tiebreaker edge.

How to Fix LCP, INP, and CLS

Each metric has its own failure modes and its own fix catalog. Skip the general "make the page faster" advice; the specific tactic depends on which metric is failing.

Fix LCP: make the largest element arrive first

Identify the LCP element first. PageSpeed Insights tells you in the Diagnostics section. Look for "Largest Contentful Paint element" with the exact HTML. It's usually a hero image, a heading, or a large text block. Once you know what it is, the fix catalog is:

  • Preload the LCP image. Add <link rel="preload" as="image" href="..." fetchpriority="high"> in the head. This starts the fetch before the browser discovers the <img> tag during parsing. Saves 100–400 ms on most sites. Only add it if you have verified the element is actually the LCP, since preloading the wrong image slows your real LCP down (see Lumina's own homepage LCP investigation from 2026-04-20 for the pattern).
  • Serve the LCP image in WebP or AVIF. WebP saves 25–35% vs JPG at matched quality. AVIF another 20% on top. Both are supported by every browser in 2026. Cloudflare Images, ImageKit, Next.js Image, or a build-time conversion script covers this.
  • Set loading="eager" and fetchpriority="high" on the LCP image. The default is loading="lazy" in some CMSes, which delays the fetch by hundreds of milliseconds on above-fold images. Eager + high priority signals the critical path.
  • Fix the server's Time To First Byte (TTFB). Every millisecond of TTFB adds to LCP directly. Cache at the edge (Cloudflare, Fastly, Netlify). If your app server takes 800 ms to render, no front-end optimization recovers it.
  • Inline the critical CSS. External stylesheets block the first paint. Extract the CSS needed for above-the-fold content and inline it in the head. Defer the rest with <link rel="preload" as="style" onload="this.rel='stylesheet'">.
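Putting the catalog together, a hero-image head might look like this. An assembled sketch with placeholder paths; verify the image really is your LCP element before preloading it.

```html
<head>
  <!-- Start the hero fetch before the parser discovers the <img> tag -->
  <link rel="preload" as="image" href="/img/hero.avif" fetchpriority="high">

  <!-- Critical above-the-fold CSS inlined; full stylesheet deferred -->
  <style>/* extracted critical CSS here */</style>
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
</head>
<body>
  <!-- Eager + high priority: this image is the LCP element -->
  <img src="/img/hero.avif" width="1200" height="630" alt="Hero"
       loading="eager" fetchpriority="high">
</body>
```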

Fix INP: keep the main thread responsive

INP fails when JavaScript blocks the main thread during a user interaction. The fix catalog is all about shortening long tasks:

  • Break up long tasks. Any synchronous JavaScript block running > 50 ms is a "long task" in Chrome's vocabulary. Split it with scheduler.yield() (modern) or await new Promise(r => setTimeout(r, 0)) (compatible) inside loops.
  • Defer non-essential third-party scripts. Analytics, chat widgets, ad scripts, heatmap trackers. Load them after first interaction, not at page load. Lumina's own GTM setup uses interaction-only deferral (scroll, click, keydown, mousemove) to keep gtag.js out of the pre-interaction main thread.
  • Avoid forced layout inside event handlers. Reading offsetWidth, getBoundingClientRect, clientHeight right after a DOM write forces the browser to recalculate layout synchronously. Batch reads before writes; use requestAnimationFrame when you need layout data.
  • Use CSS transitions instead of JavaScript animations. CSS runs on the compositor thread; JS animations on the main thread. A dropdown that uses transition: opacity 0.2s responds in 16 ms; the same dropdown with a 200-line JS handler takes 200 ms+ on slow devices.
  • Cut React re-renders. React components re-rendering on every interaction, especially context providers with frequently-changing state, cascade into hundreds of virtual DOM diffs. Memoize with React.memo, use refs for non-UI state, split context by update frequency.
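The long-task split from the first bullet can be sketched like this, using the compatible setTimeout(0) fallback and scheduler.yield() where the browser supports it:

```javascript
// Yield control back to the main thread between chunks so input
// events and paints can interleave with the work.
function yieldToMain() {
  // scheduler.yield() in supporting browsers; setTimeout(0) elsewhere.
  if (typeof scheduler !== "undefined" && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array without producing one long (>50 ms) task.
async function processInChunks(items, handleItem, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handleItem(item));
    }
    await yieldToMain(); // the long task is broken up here
  }
  return results;
}
```

Tune chunkSize so each chunk stays under 50 ms on a slow device; the work takes marginally longer in total, but every click between chunks gets a paint.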

Fix CLS: reserve space before content arrives

CLS is fixable with consistent attention to layout stability. The fix catalog is short but often ignored:

  • Always set width and height on every <img>. The browser reserves space via the aspect ratio before the image loads. Without these attributes, the image inserts with zero height initially, then pushes everything below down when it paints.
  • Reserve space for ads, embeds, and dynamically-injected content. Wrap the slot in a <div> with a fixed min-height matching the expected content size. When the ad loads, it fills the reserved space instead of pushing the article.
  • Use font-display: swap carefully. It prevents invisible text but triggers a layout shift when the web font loads and replaces the fallback. If the fallback and web font have similar metrics (use size-adjust, ascent-override, descent-override on the @font-face), the swap is invisible. If they don't, the text reflows.
  • Avoid inserting content above existing content after load. Banner notifications, cookie prompts, newsletter overlays. If they render at the top of the page mid-load, they shift everything down. Render them as fixed-position overlays instead.
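The font-metric override from the third bullet looks like this in practice. The override numbers are illustrative placeholders; derive the real values from your web font's metrics (tooling exists to compute them).

```css
/* Fallback font tuned to the web font's metrics so the
   font-display: swap moment doesn't shift the layout. */
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/body.woff2") format("woff2");
  font-display: swap;
}
@font-face {
  font-family: "BodyFont Fallback";
  src: local("Arial");
  size-adjust: 105%;    /* match average glyph width */
  ascent-override: 90%; /* match vertical metrics */
  descent-override: 22%;
}
body {
  font-family: "BodyFont", "BodyFont Fallback", sans-serif;
}
```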

After the fixes, verify with the Lumina PageSpeed tool (field + lab in one report), run JS vs No-JS to see how much CWV depends on your JavaScript layer, and check the Alt Text Checker for missing image dimensions that cause CLS. All three are free, no login.

FAQ

What are Core Web Vitals in simple terms?
Core Web Vitals are three metrics Google uses to score how a page feels for a real user: Largest Contentful Paint measures how fast the main content appears, Interaction to Next Paint measures how quickly the page responds to clicks and taps, and Cumulative Layout Shift measures how stable the layout stays while loading. They are based on Chrome field data from real users, not lab simulations, and Google uses the 75th percentile of your real-world traffic as the threshold.
Will failing Core Web Vitals hurt my Google rankings?
Marginally. Google calls page experience (which includes CWV) a tiebreaker between pages of similar quality. It's a ranking signal, but a small one. Content and backlinks still dominate. Moz ranks page one for 'page speed' with a mobile PSI around 17 out of 100, a page that misses Good on LCP and INP in Chrome field data. Ranking loss from a CWV fail alone is rare. What you actually lose is users: a page with a 5-second LCP loses visitors before they see the content.
When did INP replace FID as a Core Web Vital?
Google promoted Interaction to Next Paint to a Core Web Vital on March 12, 2024, replacing First Input Delay. FID measured only the first interaction's input delay, a 100 ms window easy to game. INP measures the 98th percentile of all interaction latencies on the page, including clicks, taps, and key presses, end-to-end from input to next paint. Five of the 10 top-ranking articles about CWV today still reference FID more than INP (or don't mention INP at all). If your article or your codebase still uses FID, it predates March 2024 and needs an update.
Why does my Core Web Vitals report show "Insufficient data"?
Google's field data comes from the Chrome User Experience Report (CrUX), which only includes pages with enough real Chrome traffic to produce a stable 28-day rolling sample. Low-traffic pages, new pages, and pages behind authentication have insufficient CrUX data. The fix is not technical; you need traffic. Until then, lab tools (PageSpeed Insights lab run, Lighthouse in DevTools) are your only signal. Lab data approximates a real user on a throttled 4G connection, different from field, but still useful for catching regressions.
Do Core Web Vitals matter more on mobile or desktop?
Mobile. Google ranks mobile and desktop separately and applies mobile CWV to mobile-first indexing, which is how Google crawls almost every site in 2026. Desktop CWV matters for desktop rankings but the thresholds are identical and desktops are fast enough that most sites pass by default. On mobile the throttled connection and less-powerful CPU expose every rendering sin. Optimize for mobile first. Desktop scores will follow.
What's the difference between field data and lab data?
Field data is the 28-day rolling aggregate of real Chrome users hitting your page, collected via CrUX and surfaced in PageSpeed Insights, Search Console's Core Web Vitals report, and the CrUX BigQuery dataset. Lab data is a single simulated run on a throttled mobile profile (1.6 Mbps, 150 ms RTT, 4x CPU slowdown), executed by Lighthouse inside Chrome DevTools or PSI. Field is what Google actually uses for ranking. Lab is how you catch regressions before they ship. Both matter. When they disagree (as they often do), field is the truth for users; the gap is telling you something about which parts of the experience real browsers optimize better than Lighthouse simulates.

Where to Start

Five moves in order. A 30-minute baseline for a small site, half a day for a medium one:

Get a field-data baseline

Run your homepage and top 5 content URLs through Lumina's PageSpeed tool. Write down the field LCP, INP, CLS at the 75th percentile per URL. This is your "today" number, not lab. Only field moves Google.

PageSpeed tool →
Find the one metric that fails most

Of the 3 metrics, usually one is the blocker site-wide. Fix it first, not all three in parallel. If it's LCP, jump to image + TTFB fixes. If INP, JS main-thread work. If CLS, image dimensions + ad slot reservation.

The fix catalog ↑
Install web-vitals.js for your own RUM

CrUX has a 28-day lag and skips low-traffic pages. Drop Google's web-vitals.js library into your page, pipe callbacks to GA4 or your analytics endpoint. You get near-real-time CWV on every URL.

GA4 Dashboard →
Test every deploy in the lab

Lighthouse inside Chrome DevTools is fast, reproducible, and free. Add a Lighthouse check to your pre-deploy checklist. Field data takes 28 days to react; lab data catches the regression in 30 seconds.

Lighthouse docs →
Watch Search Console monthly

Google's CWV report in Search Console shows URL clusters that regressed. A single deploy can degrade hundreds of URLs sharing a template. Check once a month and fix regressed patterns fast.

GSC Dashboard →

Audit Core Web Vitals in 30 seconds

Lumina's free PageSpeed tool runs both the PSI lab audit and the CrUX field data side by side, plus AI-generated fix snippets for the top opportunities. No login; the only limit is 10 AI runs per day. Same tool that powered the Live Audit in this guide.

Run the PageSpeed tool →