Every publisher knows Core Web Vitals matter for Google. Many publishers still treat them as a Q2-roadmap problem. So I stopped waiting for them to fix it and just measured: Lumina's PageSpeed tool hitting Google's official PSI API, both strategies, the 50 largest news sites in the DACH region, on the morning of publication. The result is bleak enough to name.
This is the third entry in the DACH media series. Study 1 measured which AI crawlers each publisher blocks. Study 2 measured how much content is hidden behind JavaScript. This one measures how fast any of it actually gets to a reader.
The numbers: 50 tested, 1 passes
One site in the whole sample clears mobile Core Web Vitals. That's puls24.com — an Austrian news-TV site running a trimmed, video-first stack. A second site, wienerzeitung.at, misses by a single criterion (LCP 3.2 s vs the 2.5 s budget) but is visibly the second-best at 91/100. Everyone else is in various states of slow.
50 DACH publishers, one passing score.
PSI Mobile + Desktop run against every site on the canonical DACH list (nytimes-tier national dailies, tabloids, public broadcasters, weeklies, tech titles). 48 sites returned clean data. 4 returned either fetch 400/502 or PSI timeouts. The pattern below survives the missing 4.
CLS is the one thing most DACH publishers do well. The median is 0.000 — after five years of Google pressure, layout shift essentially doesn't happen across the bulk of the cohort. The tail tells a different story: welt.de at 0.96 (mobile), wdr.de at 0.78, tagesanzeiger.ch at 2.0 — all far into the Poor zone against Google's 0.25 threshold, by factors of three to eight. The median is clean; the tail is not. LCP and TBT are where the damage lives for everyone else.
Why mobile is so much worse than desktop
Desktop median 63 vs mobile median 44. Same sites, same content, same servers — a 19-point gap. Three compounding reasons.
The first is CPU throttling. PSI runs mobile on a simulated mid-range Android (a Moto G4-class device at 4× CPU slowdown). News-site pages ship 500 kB to 3 MB of ad-tech JavaScript. A desktop CPU chews through that in 200 ms; a simulated mid-range mobile takes 2 to 3 seconds. That's where LCP and TBT both go.
The second is image weight. LCP on a publisher homepage is the hero image or the top article thumbnail. Those are uploaded and served at desktop dimensions more often than not — 1600×900 JPEGs delivered over a mobile connection. Half the publishers in the sample don't serve AVIF at all. WebP is better than plain JPEG (9 of 48 serve it), but even that isn't the norm.
The third is the consent-bidding-paywall stack. Piano paywalls, Sourcepoint or OneTrust CMPs, Google Ad Manager plus Prebid, ad-refresh loops — all run client-side before anything useful paints. On desktop the CPU eats it without complaining. On mobile the main thread is locked for a full second while the CMP boots, then another for Prebid's auction, then another for the ad creatives themselves.
LCP is the killer: median 8.9 seconds
Every publisher in this study has a story for why their LCP is high. The story is almost always the same: the hero image loads after a chain of blocking resources, lazy-load kicks in too late, and the image itself is 300 kB bigger than it needs to be.
The 21 sites with LCP above 10 seconds include household names: n-tv.de, krone.at, br.de, tt.com, servustv.com, sueddeutsche.de, stern.de, bild.de, tagesschau.de, zeit.de, nzz.ch. These are not low-traffic blogs. They're the top publishers in their markets. Their engineers know the PSI numbers. They ship the ad stack anyway because the ad stack pays the salaries.
The outliers at 33 seconds (br.de) and 32 seconds (krone.at) are worth naming. Both have visibly complex homepage layouts with multiple above-fold image slots and heavy editorial-preview JavaScript. Both fail not at one thing but at four: big hero image + slow TTFB + blocking scripts + late-firing lazy-load on images that should be eager.
Why 2.5 seconds matters
Google's LCP threshold isn't a round number. It's derived from user research showing that 75th-percentile LCP above 2.5 s correlates with a measurable bounce-rate increase. PSI rates 0-2.5 s Good (green), 2.5-4.0 s Needs Improvement (orange), and 4.0 s+ Poor (red). Of the 48 sites, 45 are in the Poor zone on LCP. Only one site is in the Good zone (puls24.com). Two sit in Needs Improvement (wienerzeitung.at, derstandard.at).
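For readers scripting their own audits, the bucketing is mechanical. Here is a minimal TypeScript sketch of the published LCP thresholds (the function name is mine, not part of any official API):

```ts
type Rating = "good" | "needs-improvement" | "poor";

// Google's published LCP buckets, in seconds.
function rateLcp(lcpSeconds: number): Rating {
  if (lcpSeconds <= 2.5) return "good";              // green
  if (lcpSeconds <= 4.0) return "needs-improvement"; // orange
  return "poor";                                     // red
}

// Example: the cohort median of 8.9 s lands deep in "poor".
console.log(rateLcp(8.9)); // "poor"
```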
The one that passes (and the one that almost does)
The 48-site cohort has a clean separation at the top. Two sites stand apart.
puls24.com — the only site that fully passes. LCP 817 ms, CLS 0.00, TBT 0 ms, overall 100/100. It's a video-TV homepage with a thin content shell that lazy-loads the video player; by the time the player boots, the first paint is long complete. This is not a model a typical newspaper can copy directly, but it shows what a DACH publisher on a modern CDN can achieve when the ad layer is light.
wienerzeitung.at — 91/100, LCP 3.2 s, misses CWV only on the LCP budget. Built on Astro (a static-first framework). Runs GTM, Google Ad Manager, and a consent layer like the rest of the field, but the base HTML is pre-rendered and the hydration is selective. Same ad obligations as a Süddeutsche, fundamentally different start-of-paint footprint.
After these two, the next-best are derstandard.at (74/100, LCP 3.4 s), then a large drop to golem.de (40/100) and handelsblatt.com (34/100). The shape of the cohort is: two clear leaders, one competent follower, everyone else in various stages of slow.
Tech stack isn't fate
I detected the tech stack on every site using Lumina's Tech Stack Detector logic in parallel with the PSI run. The headline finding is counter-intuitive: the framework barely predicts the score.
| Stack (detected) | Sites | Avg mobile score (0-100) | Examples |
|---|---|---|---|
| Next.js | 6 | 35 | 20min.ch, blick.ch, n-tv.de, news.at, trend.at, servustv.com |
| Sophora CMS | 3 | ~42 (br.de only) | br.de, tagesschau.de, ndr.de |
| WordPress | 2 | 76 (1000things.at) | 1000things.at (reference site) |
| Astro | 1 clean + 1 ambiguous | 91 (wienerzeitung.at) | wienerzeitung.at; servustv.com also matched but scored 30 |
| Nuxt | 1 | 35 | faz.net |
| SvelteKit | 1 | 41 | tagesspiegel.de |
| Drupal / InterRed / custom | 35+ | 44 (cohort median) | most major outlets — no framework fingerprint in HTML |
Next.js sites in the sample average 35/100 on mobile — below the cohort median. That's not Next.js's fault. The six Next.js sites include 20min.ch and blick.ch (Swiss tabloids on Ringier's stack), n-tv.de (German news channel), news.at and trend.at (Austrian news weeklies), and servustv.com (Austrian news-TV). Every one of them ships a heavy ad and analytics layer. Next.js didn't cause the LCP problem and didn't prevent it.
The cleanly-detected Astro site — wienerzeitung.at — scores 91, the clear second place in the cohort. Astro's "zero JavaScript by default" philosophy pairs well with publisher content: SSR-first output means the browser can paint the hero image before any hydration runs. A second site (servustv.com) matched both Astro and Next.js signatures; I treat it as a stack collision, not an Astro credit.
1000things.at, the WordPress reference site outside the cohort, scores 76 on mobile — better than the cohort median, better than all 6 Next.js sites. WordPress isn't fate either.
Takeaway: framework choice is worth maybe 10 points of CWV slack. Ad stack weight is worth 50 points. Pick the right framework if you're starting over. Remove one tracker per month if you aren't.
The real cost: the ad stack
Median mobile TBT across the cohort is 864 ms — half the sites block the main thread for at least 864 ms, more than four times the 200 ms budget. That time is almost entirely ad-tech.
The consent-bidding-ads layer I saw across the sample:
- Consent management. 8 sites on OneTrust, 8 on Sourcepoint, 2 on Usercentrics. Every CMP adds a blocking script that must run before other tags can fire. Even the "fast" CMPs add 150-300 ms on mobile.
- Header bidding. Prebid.js or equivalent runs a real-time auction before any ad can render. Auction timeouts are commonly 800-1500 ms, and that time lands in TBT directly (see the config sketch after this list).
- Ad creatives. Once the auction completes, the winning creative loads — often a 200 kB video or animation. Another 400-800 ms of main-thread work.
- Paywall metering. 9 of the sites use Piano. Piano's client-side decisioning fires on every pageview and gates content. It's non-trivial work even when no paywall prompt is actually shown.
- Tracking scripts. 14 sites declare Google Tag Manager, plus a long tail of Matomo, Adobe Analytics, Microsoft Clarity, and Sentry. Each tag is a small burden; 15 tags are not.
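To make the header-bidding line item concrete, here is a hedged sketch using Prebid.js's documented setConfig API. The 800 ms value is purely illustrative; whether a publisher can afford a tighter window depends on the bidder mix.

```ts
// Prebid.js ships as a page global (pbjs); declared here so the sketch type-checks.
declare const pbjs: { que: Array<() => void>; setConfig(cfg: object): void };

pbjs.que.push(() => {
  pbjs.setConfig({
    // Cap the auction: bidders that miss the window are dropped
    // instead of holding the main thread and the ad slot hostage.
    bidderTimeout: 800, // illustrative; common defaults run 1000-1500 ms
  });
});
```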
None of this is broken. All of it is normal for a 2026 publisher. The problem is that publishers have added these layers one at a time over a decade without ever doing a top-down budget of what the combined cost is. This study is that budget, for the whole cohort, at a single point in time.
What publishers can actually do
Six interventions that have shipped CWV gains at real publishers, in rough order of cost-to-benefit:
- Set a hero-image preload and an AVIF source. The single biggest LCP lever is making the hero image load first and small. A <link rel="preload" as="image" fetchpriority="high"> plus an AVIF variant routinely drops LCP 30-50% on news homepages.
- Move the CMP to a lazy-boot pattern. OneTrust and Sourcepoint both support a deferred boot where the banner shows after first paint. It costs you nothing legally (the banner still blocks non-essential cookies before consent) and saves 200-400 ms of TBT.
- Cut one tracker. Most publishers have 3-5 analytics tags. Matomo + GA + Adobe is not a feature; it's a committee artifact. Removing one is a 50-150 ms TBT win.
- Switch GTM to an interaction-gated load. Lumina does this on its own pages. GTM only injects after the first scroll, click, or keydown (a minimal loader sketch follows this list). Lighthouse's unused-JavaScript finding vanishes. Real-user analytics loss is <2% (bounced sessions you weren't going to monetize anyway).
- Adopt SSR-first framework choices on greenfield. If you're building a new site or rewriting a section, pick Astro, Next.js with streaming SSR, or plain server-rendered HTML. Avoid CSR-first frameworks for content pages. Save the React-heavy stuff for interactive dashboards.
- Run PSI weekly, track the median. Performance is a ratchet that tightens when nobody's watching. Every new ad partner, new tracking pixel, new front-end package adds weight. A weekly median across your top 20 pages surfaces the regressions before the quarterly CWV review.
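For the GTM bullet above, a minimal TypeScript sketch of the interaction-gated loader. GTM-XXXXXXX is a placeholder container ID; the snippet mirrors Google's standard bootstrap, just gated behind the first interaction instead of running at parse time.

```ts
const GTM_ID = "GTM-XXXXXXX"; // placeholder container ID
let gtmLoaded = false;

function loadGtm(): void {
  if (gtmLoaded) return; // three listeners below, only one boot
  gtmLoaded = true;
  // Mirror Google's standard snippet: mark the start, then inject gtm.js.
  const w = window as typeof window & { dataLayer?: object[] };
  w.dataLayer = w.dataLayer ?? [];
  w.dataLayer.push({ "gtm.start": Date.now(), event: "gtm.js" });
  const s = document.createElement("script");
  s.async = true;
  s.src = `https://www.googletagmanager.com/gtm.js?id=${GTM_ID}`;
  document.head.appendChild(s);
}

// Boot on the first scroll, click, or keydown; `once` cleans up each listener.
(["scroll", "click", "keydown"] as const).forEach((evt) =>
  window.addEventListener(evt, loadGtm, { once: true, passive: true })
);
```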
The publishers that take LCP seriously in 2026 will have a measurable Google ranking advantage over the ones that treat it as a Q2 problem. And increasingly, the AI search engines reward fast sites too: Vercel's data shows ChatGPT spends 34% of its crawl on 404s and 14% following redirects. Slow pages get abandoned before they finish loading, in AI crawls just like in user sessions.
Methodology
I ran Google's PageSpeed Insights API against the 50 largest DACH news sites on 2026-04-20, both mobile and desktop strategies, category performance. The list is the same one I used in Study 1 and Study 2 — a canonical set of AT + DE + CH dailies, tabloids, public broadcasters, weeklies, and tech titles.
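For anyone reproducing the run, this is roughly the shape of each call. The endpoint, query parameters, and audit IDs are the PSI API's documented ones; the function name and return shape are my own.

```ts
const PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

async function psiRun(url: string, strategy: "mobile" | "desktop", key: string) {
  const qs = new URLSearchParams({ url, strategy, category: "performance", key });
  const res = await fetch(`${PSI}?${qs}`);
  if (!res.ok) throw new Error(`PSI ${res.status} for ${url}`); // the 400/502 failure mode
  const lhr = (await res.json()).lighthouseResult;
  return {
    score: Math.round(lhr.categories.performance.score * 100), // 0-100
    lcpMs: lhr.audits["largest-contentful-paint"].numericValue,
    tbtMs: lhr.audits["total-blocking-time"].numericValue,
    cls: lhr.audits["cumulative-layout-shift"].numericValue,
  };
}
```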
Each measurement is a single lab run of Lighthouse through the official PSI endpoint. PSI variance between consecutive runs is real — Lumina's PageSpeed tool sees mobile TBT swing by 30-320 ms between runs. Where numbers felt unusually off, I retested; retests stayed within 5 points. The medians and the "1 of 48 pass" headline are robust across this noise.
Four sites returned a 400 from the fetch (sueddeutsche.de, zdf.de, falter.at) or a 502 (tagesanzeiger.ch); handelsblatt.com hit a PSI timeout on one strategy and succeeded on retry. I report on the 48 sites I could cleanly measure. The 4 I couldn't are not excluded for being too slow — they're excluded for infrastructure reasons unrelated to their PageSpeed behavior.
Tech stack detection uses the same pattern-matching logic that powers Lumina's Tech Stack Detector, run against the raw HTML returned by the fetch. It catches declared assets — CMS markers, framework hydration fingerprints, CDN scripts, analytics pixels. It doesn't catch backend-only technologies or tools that are embedded in iframes. Numbers in the "Stack" table are reliable for what's declared client-side, conservative for what isn't.
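A simplified sketch of that kind of detection, assuming nothing about Lumina's actual rules. The regexes are illustrative stand-ins, though the fingerprints themselves (Next.js's __NEXT_DATA__, Astro's astro-island element, WordPress's /wp-content/ paths) are well-known public markers:

```ts
// Illustrative fingerprints; a real detector needs many more rules
// plus disambiguation for collisions like the servustv.com case above.
const FINGERPRINTS: Record<string, RegExp> = {
  "Next.js": /__NEXT_DATA__|\/_next\//,
  Nuxt: /__NUXT__|\/_nuxt\//,
  Astro: /<astro-island/,
  SvelteKit: /data-sveltekit/,
  WordPress: /\/wp-content\/|\/wp-includes\//,
};

function detectStacks(html: string): string[] {
  return Object.entries(FINGERPRINTS)
    .filter(([, re]) => re.test(html))
    .map(([name]) => name);
}
```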
Where to start
Five steps, in order, for a publisher that wants to move the median this quarter:
1. Run Lumina's PageSpeed tool or the official PSI against your homepage plus your three most-trafficked article templates. Set mobile as the default strategy. Free, no signup. PageSpeed Tool →
2. Add <link rel="preload" as="image" fetchpriority="high"> for the above-fold hero image and swap its format to AVIF where possible. This is the single biggest LCP lever on news homepages.
3. Most DACH publishers have 15-30 third-party scripts loading in parallel. Lumina's Tech Stack Detector surfaces the declared ones. Remove one tracker, cut 50-150 ms of TBT. Tech Stack Detector →
4. GTM doesn't need to load before first paint. An interaction-gated loader (scroll, click, keydown) drops 60-200 ms of TBT with negligible analytics loss. The same pattern works for most CMPs. Why deferred loading works →
5. A weekly PSI run across your top 20 pages catches regressions before the next quarterly review. Performance decays silently when nobody is watching the graph. GSC Dashboard →

Benchmark your site against the DACH cohort
Lumina's free PageSpeed tool runs the same PSI mobile + desktop test I used for this study. One URL, no signup, honest numbers — plus an AI Fix Engine that generates copy-ready code for the specific issues it finds.
Run the PageSpeed Tool →