JavaScript SEO is the gap between what your framework renders and what a crawler actually sees. Sometimes the gap is zero — a server-rendered Next.js page ships full HTML, every word indexed by Googlebot, every word read by ChatGPT. Sometimes the gap is everything — a client-side React SPA ships a single empty <div id="root"> and four megabytes of JavaScript, and the AI crawler that just visited extracted nothing. In 2026 the cost of getting this wrong is bigger than it has ever been, because there are now two distinct crawler classes with two completely different rendering behaviors. I ran Lumina's JS vs No-JS tool against the 12 top-ranking guides for javascript seo on Google EN and DE, and the pattern is bleak: zero ship FAQPage schema, four still teach Dynamic Rendering as a current rendering mode without flagging Google's stance (Google's own docs call it "a workaround and not a long-term solution"), and Onely's "Ultimate Guide" — currently rank 7 on Google US — has not had its dateModified bumped since March 2020.
This guide is the complete evergreen reference. What JavaScript SEO is, how Google's three-phase rendering pipeline actually works, the five rendering modes and which one to pick in 2026, the seven problems that cause most production failures, the new and largely unwritten rules for AI crawler visibility, framework-by-framework patterns for Next.js / Nuxt / SvelteKit / Astro / Remix, how to test it, and why Dynamic Rendering should be off your shortlist. No fluff. Code where it matters. If you want to test your own site while reading, open Lumina's JS vs No-JS tool in a second tab.
What JavaScript SEO Actually Is
JavaScript SEO is the practice of making sure search engines and AI crawlers can read content that JavaScript adds to a page. Every modern site has two HTML versions: the raw HTML the server returns, and the rendered HTML after every script runs. JavaScript SEO is the gap between them, and the gap matters because not every crawler closes it.
You can see both for yourself. The raw HTML lives in view-source: in the browser. The rendered HTML lives in the Elements panel of DevTools. The discipline of JavaScript SEO is keeping those two close enough that every crawler — Googlebot, GPTBot, ClaudeBot — sees the same thing as a human user.
For a static HTML site with no JavaScript, the gap is zero. The two HTML versions are identical. Every crawler reads the same content. For a site built with React, Vue, or Angular and shipped as a client-side SPA, the gap is everything — the raw HTML is a near-empty shell, and the visible content only appears after the JavaScript bundle runs in the browser. Most real sites are somewhere in between: a rendered framework that ships the main content as static HTML and uses JavaScript for interactivity.
The reason this matters in 2026 is that crawlers split into two classes with very different behavior. Googlebot uses a headless Chromium and renders JavaScript reliably (with caveats discussed below). The new generation of AI crawlers — GPTBot from OpenAI, ClaudeBot from Anthropic, PerplexityBot, Google's own Google-Extended for Gemini training — fetch raw HTML and walk away. They do not run JavaScript. They do not wait for hydration. They do not retry. The CSR site that ranks fine in Google can be invisible to ChatGPT.
How Google Renders JavaScript: The Two-Wave Reality
Google handles JavaScript in three sequential phases: crawling, rendering, indexing. Pages with a 200 status code enter a rendering queue after the initial crawl, where a headless Chromium executes the JavaScript and feeds the rendered HTML back into indexing. Most JavaScript SEO problems live in the gap between those phases, because rendering is neither instant nor guaranteed.
Phase 1 — Crawling. Googlebot fetches the URL and reads the raw HTML response. Status code, response headers, anything in the head, anything in the body that exists before any script has run. This is the version that AI crawlers also see.
Phase 2 — Rendering. Pages that returned a 200 status code (and aren't blocked by robots meta tags) enter a rendering queue. When Google's resources allow, a headless Chromium executes the page, runs the JavaScript, builds the rendered DOM. Google's documentation says the queue can take "a few seconds" but "can take longer." There is no SLA. Rendering competes with crawl budget, with other rendering jobs, with Google's own infrastructure load.
Phase 3 — Indexing. Once rendering completes, Google takes the rendered HTML and runs its indexing pipeline against it. Title, meta description, headings, links, structured data, content. Anything that JavaScript added becomes available for ranking.
The thing every SEO writer has been saying for years — "Google's second wave of indexing can take days or weeks" — was true in 2018, when Martin Splitt first publicly described the system. It is more nuanced today. For most pages, rendering happens within seconds of the initial crawl. For very large sites with heavy JavaScript, or for sites Google has decided don't deserve much rendering budget, the queue can stretch. The risk has gone down; it has not gone to zero. Pages that depend entirely on JavaScript still get indexed slower than pages that ship full HTML, and the indexing of newly added or edited content lags behind. If you publish a hot piece on a CSR site, it might rank ten minutes later or it might rank tomorrow.
The crawler does not behave like your laptop.
Googlebot's headless Chromium has memory caps, time caps, and CPU caps. It runs hundreds of pages per second across the index. If your page takes 8 seconds to fully hydrate, Google may render the partial state and move on. Lazy-loaded sections that depend on intersection observers may never trigger because the headless browser doesn't scroll. Pop-ups that intercept the click flow won't dismiss themselves. Build pages assuming the renderer is patient but not infinitely patient.
SSR, CSR, Static, Hybrid: Pick One
Modern frameworks support five distinct rendering modes: server-side rendering (SSR), static site generation (SSG), incremental static regeneration (ISR), client-side rendering (CSR), and hybrid. The choice you make here is the single biggest decision for JavaScript SEO. Four of the five are safe for public pages; one is a trap that breaks AI visibility entirely.
Server-Side Rendering (SSR)
The server runs the framework, generates the full HTML for the requested URL, and sends that HTML to the client. The browser displays the rendered page immediately, then a JavaScript bundle hydrates it for interactivity. Crawlers that read raw HTML see the complete page. Next.js with getServerSideProps or the App Router default, Nuxt 3 with universal mode, Remix, SvelteKit with server-side load functions, Angular Universal — all SSR. This is the safest choice for SEO. Cost: you pay for compute on every request.
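To make the shape concrete, here is a minimal SSR sketch using Next.js App Router conventions (13/14-style params). The route path, the API URL, and the Product type are hypothetical; the point is that both the fetch and the render happen on the server, so the HTML reaches the crawler already populated.

```tsx
// app/products/[slug]/page.tsx: hypothetical route. App Router pages are
// server components by default, so this renders to full HTML on the server.
type Product = { name: string; description: string };

export default async function ProductPage({
  params,
}: {
  params: { slug: string };
}) {
  // Hypothetical API. cache: 'no-store' opts out of caching, i.e. classic
  // per-request SSR rather than static or ISR behavior.
  const res = await fetch(`https://api.example.com/products/${params.slug}`, {
    cache: 'no-store',
  });
  const product: Product = await res.json();

  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}
```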
Static Site Generation (SSG)
The framework pre-renders every page at build time, producing static HTML files served from CDN. No per-request compute, no rendering delay, no risk of server failure on indexing-critical pages. Astro by default, Next.js with getStaticProps, Gatsby, Hugo, 11ty, Lumina itself. This is the fastest option for SEO and the cheapest to host. Cost: every content change requires a rebuild and deploy. Not viable for sites with millions of dynamic URLs.
Incremental Static Regeneration (ISR)
SSG with a cache-revalidation policy. Each URL is statically generated on first request, then served from cache, then regenerated in the background when a revalidation timer expires. Combines SSG's performance with the ability to handle very large URL spaces. Next.js, Nuxt 3, Astro all support it. Excellent SEO behavior — crawlers always get cached HTML, never hit a slow first-render path.
Client-Side Rendering (CSR)
The server returns a near-empty HTML shell with a script tag. The browser downloads the JavaScript bundle, executes it, builds the DOM, fetches data from APIs, renders the content. This is what plain Create React App, Vite + React with no SSR layer, and most older Vue or Angular SPAs do. Avoid for any public-facing page that needs to rank. Googlebot can render it but with delay and risk; AI crawlers cannot render it at all.
Hybrid (per-route mode selection)
The framework decides per route which mode to use. Public marketing pages and blog posts get SSG or SSR; the authenticated dashboard gets CSR. Next.js App Router treats this as the default model. Astro's "islands" architecture takes it further — most of the page is static, with small JavaScript-hydrated regions where they're needed. Hybrid is what almost every modern site should be running.
| Mode | SEO Risk | AI Crawlers See | Best For |
|---|---|---|---|
| SSR | Low | Full content | Personalized public pages, content with auth states |
| SSG | Lowest | Full content | Marketing pages, blogs, docs, product pages |
| ISR | Low | Full content | Large catalogs, news sites, e-commerce |
| CSR | High | Empty shell | Authenticated dashboards, internal tools |
| Hybrid | Low (per route) | Per route | Most modern sites |
The 7 Most Common JavaScript SEO Problems
Every site I have audited in the past two years that had ranking issues caused by JavaScript fell into one of seven recurring patterns. Six are fixable in an afternoon by editing markup or routing config. One requires re-architecting the rendering pipeline. None of them are subtle once you know what to look for in the raw HTML.
1. Critical content rendered only after hydration. The server returns an empty shell. Title and meta tags exist (good), but the H1, body copy, and primary navigation only appear after React hydrates. In source view: nothing. In rendered DOM: everything. Googlebot will likely render this eventually; AI crawlers see nothing. The fix: switch the route to SSR or SSG. If migrating the whole stack is infeasible, hand-render the critical-path HTML server-side and let JavaScript take over for interactivity.
2. Internal links rendered as <div onclick> instead of <a href>. A startlingly common pattern in React apps. The element looks like a link, behaves like a link to humans, but a crawler that doesn't execute JavaScript sees a div with no destination. Even Googlebot's renderer can miss these because they don't fire any synthetic navigation event the crawler hooks into. Use real <a> tags with real href attributes for every internal navigation. React Router's <Link> renders an <a> by default; use it.
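A minimal sketch of the same nav item written both ways; the /pricing path is illustrative. The first version navigates fine for a human but gives a non-rendering crawler no destination to follow; the second is a plain anchor, which is also what React Router's <Link> renders.

```tsx
// Don't: looks like a link, acts like a link on click, but there is no href
// for a crawler to discover and no middle-click / open-in-new-tab for users.
function BadNavItem() {
  return <div onClick={() => (window.location.href = '/pricing')}>Pricing</div>;
}

// Do: a real <a href>. Crawlers extract the URL straight from the raw HTML.
function GoodNavItem() {
  return <a href="/pricing">Pricing</a>;
}
```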
3. Lazy-loaded content gated by intersection observers. The renderer doesn't scroll. Content that only loads when the user scrolls into view never loads for the headless renderer. Image lazy-loading via loading="lazy" is fine — Googlebot handles it. Custom JavaScript that uses IntersectionObserver to load text content, comment threads, or product variants is not fine. Move that content into the initial render or into a click-triggered expansion.
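One way to keep the expand-on-demand UX without hiding text from crawlers is a sketch like this: the answer is part of the server-rendered HTML and the native details element handles the collapse, so no IntersectionObserver and no client-side fetch are involved. The FaqItem name and props are hypothetical.

```tsx
// The full answer ships in the initial HTML (crawlable), collapsed by default.
function FaqItem({ question, answer }: { question: string; answer: string }) {
  return (
    <details>
      <summary>{question}</summary>
      <p>{answer}</p>
    </details>
  );
}
```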
4. noscript fallbacks that lie. The pattern: real content delivered by JavaScript, with a stripped-down "please enable JavaScript" message or a stale copy of the content in <noscript>. Crawlers that don't execute JavaScript see only the noscript version; everyone else sees the JavaScript-rendered content, and the two diverge over time. Either ship full content in the noscript fallback (defeating the SPA purpose) or remove the noscript block entirely and use SSR instead. The half-measure is the worst option.
5. Hash-fragment routing. Single-page apps that use #/page URLs instead of pushState routing. The fragment is never sent to the server in the request URL, so the server can't render different content per route, and crawlers don't index it as a separate page. Migrate to history-API routing with a server-side fallback that returns the right HTML for each path.
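In React Router v6 terms the difference is one factory function; this sketch assumes hypothetical Home and About components. createHashRouter produces #/about URLs the server never receives, while createBrowserRouter uses the History API, so /about is a real path the server can render, or at least serve correct HTML for.

```tsx
import { createBrowserRouter, RouterProvider } from 'react-router-dom';

function Home() {
  return <h1>Home</h1>;
}
function About() {
  return <h1>About</h1>;
}

// History-API routing: /about reaches the server and can be indexed as its own
// URL. The server still needs a fallback (or SSR) that answers for each path.
const router = createBrowserRouter([
  { path: '/', element: <Home /> },
  { path: '/about', element: <About /> },
]);

export function App() {
  return <RouterProvider router={router} />;
}
```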
6. Redirects implemented in JavaScript. The page returns a 200 status code with a window.location = '/new-url' in a script tag. Browsers follow the redirect; crawlers may or may not. Even Googlebot prefers HTTP 301/302 redirects because they preserve the canonical signal cleanly. AI crawlers absolutely will not follow JavaScript redirects. Use server-side redirects whenever possible.
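As one example of handling this at the framework layer rather than in a script tag, a Next.js redirect rule looks roughly like the sketch below (next.config.ts is supported in recent Next.js versions; older projects use next.config.js). The paths are hypothetical; permanent: true makes Next.js answer with a 308, which Google treats as a permanent redirect.

```ts
// next.config.ts: a hypothetical legacy URL mapped with a real HTTP redirect,
// so every crawler (and every AI bot) sees the new location without running JS.
import type { NextConfig } from 'next';

const config: NextConfig = {
  async redirects() {
    return [
      {
        source: '/old-pricing',   // hypothetical legacy path
        destination: '/pricing',  // hypothetical new path
        permanent: true,          // served as a 308 permanent redirect
      },
    ];
  },
};

export default config;
```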
7. Heavy hydration that delays interactivity past Google's render budget. The page eventually renders fine, but the JavaScript bundle is so large that hydration takes 8 or 10 seconds. Googlebot's renderer may capture the page in a partial state, indexing only what was rendered before its budget ran out. The fix is the same as for Core Web Vitals: code-split, lazy-load non-critical JavaScript, ship less. Lumina's Core Web Vitals guide covers the practical patterns for getting bundle size under control.
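A minimal code-splitting sketch with plain React.lazy; ReviewWidget is a hypothetical heavy component that is not needed for first paint, so it moves into its own chunk while the indexable content stays in the initial render.

```tsx
import { lazy, Suspense } from 'react';

// Hypothetical heavy component, split out of the main bundle and loaded
// after the critical content has rendered.
const ReviewWidget = lazy(() => import('./ReviewWidget'));

export function ProductPage() {
  return (
    <main>
      {/* Indexable content ships in the initial render, not behind the split. */}
      <h1>Product name, description, and price live here</h1>
      <Suspense fallback={<p>Loading reviews...</p>}>
        <ReviewWidget />
      </Suspense>
    </main>
  );
}
```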
Top 12 ranking pages for “javascript seo” on Google EN + DE: what they miss.
I ran Lumina's JS vs No-JS tool, Schema Validator, and Meta Tag Analyzer (Playwright + JS-rendered DOM) against the top 7 EN organic results (Impression Digital, Ahrefs, Sitebulb, SEMrush, BrightEdge, Contentful, Onely) and the top 5 DE results (Seokratie, Seobility, diva-e, strategievier, abakus). The combined sample is what new readers most often land on.
Two patterns stand out in the structured data across the sample:
- Stale dateModified values. Onely's guide still carries a dateModified of 2020-03-11; SEMrush at rank 5 sits at 1,146 days since its last bump, and the title says one year while the schema says another.
- Weak entity linking. Only some of the twelve reference author: {@id: ...} and publisher: {@id: ...}; the rest ship inline duplicates that bloat schema and weaken entity linking for AI search.

JavaScript SEO and AI Crawlers: The 2026 Frontier
AI crawlers do not execute JavaScript. GPTBot, ClaudeBot, PerplexityBot, and Google-Extended all fetch raw HTML and walk away — no headless browser, no rendering queue, no retry. This is the new constraint nobody on the SERP for JavaScript SEO is writing about, and it has changed the calculus of client-side rendering more than any Google update in years.
Each of these bots — OpenAI's GPTBot, Anthropic's ClaudeBot, PerplexityBot, Google's Google-Extended (used to train Gemini, separate from the Googlebot used for Search) — issues a GET request, reads the raw HTML response, extracts what it finds, and moves on. If your content lives in a JavaScript bundle that hydrates on the client, every AI search engine sees an empty shell.
The traffic numbers make this concrete. Vercel's network data, published in early 2025, showed GPTBot generating 569 million requests per month across the platform and ClaudeBot at 370 million. Both bots are reading static HTML on every one of those requests. A site that depends on client-side rendering hands those crawlers nothing — no product descriptions, no FAQ answers, no comparison tables, no review quotes. The page appears in the crawler's request log; the content doesn't appear in the model's training or retrieval pipeline.
This matters because the AI search interface is now a meaningful share of "how people find information." ChatGPT Search, Claude with web access, Perplexity, Gemini's grounded answers, Google AI Overviews — all are pulling from the open web in real time, and all are pulling from the raw HTML version of your site. The blue-link traffic from a Google ranking still exists and still matters. The AI-citation traffic is new, growing fast, and gated entirely on whether your content is in the raw HTML.
Test this yourself in 60 seconds.
Open view-source: on a key product or content page. Scroll. If you see your headline, your body copy, your prices, your reviews, your FAQ — AI crawlers see them too. If you see <div id="root"></div> followed by a script tag, AI crawlers see exactly what you see: nothing. Lumina's JS vs No-JS tool automates the same check at scale and shows the rendered-vs-raw word count side by side.
The fix path is the same one Google has recommended for years: server-side rendering or static rendering. The difference is that the cost-benefit analysis has shifted. CSR was always a Google indexing risk; in 2026 it is also an AI visibility extinction event. The same architectural change protects both.
Framework Cheat Sheet: Next.js, Nuxt, SvelteKit, Astro, Remix
Every modern JavaScript framework supports a rendering mode that is safe for SEO. The defaults vary, the API names vary, the ergonomics vary. What follows is a short reference: which mode each framework defaults to, the configuration switches that matter for SEO, and the single biggest mistake to avoid in each one.
Next.js
App Router (Next 13+) defaults to server components and server-side rendering. A page is server-rendered unless you explicitly mark it as client-only with the 'use client' directive. For static generation, export a generateStaticParams function. For ISR, set export const revalidate = 60 (seconds). The Pages Router still works and uses getStaticProps / getServerSideProps / getStaticPaths. Both routers ship full HTML to crawlers when configured correctly. The mistake to avoid: marking the entire app as 'use client' and accidentally turning Next into a CSR framework. Audit which components actually need client interactivity.
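Put together, a static-plus-ISR blog route looks roughly like this (App Router, Next.js 13/14-style params; the CMS endpoint and post shape are hypothetical):

```tsx
// app/blog/[slug]/page.tsx: pre-rendered at build time, refreshed via ISR.
export const revalidate = 60; // regenerate in the background at most once a minute

export async function generateStaticParams() {
  // Hypothetical CMS call returning the slugs to pre-render at build time.
  const posts: { slug: string }[] = await fetch('https://cms.example.com/posts')
    .then((r) => r.json());
  return posts.map((post) => ({ slug: post.slug }));
}

export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post: { title: string; excerpt: string } = await fetch(
    `https://cms.example.com/posts/${params.slug}`,
  ).then((r) => r.json());

  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.excerpt}</p>
    </article>
  );
}
```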
Nuxt 3
Universal mode (the default) is SSR. nuxt build produces a Node server that renders pages on demand. nuxt generate pre-renders all routes for static deployment. Hybrid rendering via the routeRules config in nuxt.config.ts lets you mix SSR, SSG, ISR, and SPA mode per route. The mistake to avoid: switching to spa mode globally because client-only auth flows were easier — that turns the entire site CSR.
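A routeRules sketch with illustrative patterns and intervals; ISR behavior depends on the deployment provider supporting it.

```ts
// nuxt.config.ts: per-route rendering modes in Nuxt 3.
export default defineNuxtConfig({
  routeRules: {
    '/': { prerender: true },        // SSG: built once at deploy time
    '/blog/**': { isr: 3600 },       // ISR: regenerate at most hourly
    '/products/**': { swr: 600 },    // cache the SSR response, revalidate in background
    '/dashboard/**': { ssr: false }, // CSR is fine here: it sits behind a login
  },
});
```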
SvelteKit
Server-side rendering by default. +page.server.ts runs on the server, +page.ts runs in both contexts. Static generation via the @sveltejs/adapter-static adapter. SvelteKit's server-first design makes accidentally shipping a CSR-only page hard, which is part of why it punches above its weight on SEO benchmarks.
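A minimal server load sketch; the route and CMS endpoint are hypothetical. Because the file is +page.server.ts, the fetch runs only on the server and the post data is part of the HTML the crawler receives.

```ts
// src/routes/blog/[slug]/+page.server.ts
import type { PageServerLoad } from './$types';

export const load: PageServerLoad = async ({ params, fetch }) => {
  // Hypothetical CMS endpoint; never exposed to or executed in the browser.
  const res = await fetch(`https://cms.example.com/posts/${params.slug}`);
  return { post: await res.json() };
};
```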
Astro
Zero JavaScript by default. Each .astro file pre-renders to static HTML. Interactivity is opt-in via the "islands" model — drop a React, Vue, Svelte, or Solid component in with a client:load or client:visible directive and only that island ships JavaScript. The strongest framework default I have seen for content sites where SEO is the primary constraint. Lumina itself is plain HTML, not Astro, but Astro is the framework I recommend most often when a team is starting fresh on a marketing or content site.
Remix
Server-side rendering with progressive enhancement as the design philosophy. Every route ships full HTML. Forms work without JavaScript by default. Loaders run on the server. Remix's mental model is the closest to the original "web platform" model of any modern framework, which makes it a natural fit for SEO-critical work. The catch: it is less popular than Next.js, so the surrounding pool of SEO-aware libraries and tutorials is smaller.
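A minimal Remix v2 route sketch; the flat-route filename and CMS endpoint are hypothetical. The loader runs on the server and the component renders to complete HTML before any client JavaScript arrives.

```tsx
// app/routes/posts.$slug.tsx
import { json, type LoaderFunctionArgs } from '@remix-run/node';
import { useLoaderData } from '@remix-run/react';

export async function loader({ params }: LoaderFunctionArgs) {
  // Hypothetical CMS call; the result is serialized into the server-rendered HTML.
  const res = await fetch(`https://cms.example.com/posts/${params.slug}`);
  return json(await res.json());
}

export default function Post() {
  const post = useLoaderData<typeof loader>();
  return (
    <article>
      <h1>{post.title}</h1>
    </article>
  );
}
```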
What to avoid
Pure client-side React with Vite, Create React App (legacy but still in use), Vue 2 SPAs without Nuxt, Angular without Universal, any pre-2018 SPA setup that does not have an SSR or static path. These all ship empty shells. They are fine for authenticated dashboards. They are not fine for any page you want indexed or cited.
How to Test JavaScript SEO
Three checks cover most JavaScript SEO failures: a source-vs-rendered diff in the browser, a Google Search Console URL Inspection to see what Googlebot rendered, and a curl simulation as GPTBot or ClaudeBot to confirm what AI crawlers actually see. Run all three on every public page that matters before declaring a JavaScript stack SEO-ready.
1. Source vs rendered diff. Open the page and hit Ctrl-U (or prefix the URL with view-source:) to see the raw HTML. Then open DevTools' Elements panel to see the rendered DOM. The two should overlap heavily for any page you want indexed. If the raw HTML is missing your H1, your body copy, or your internal navigation links, the page is failing the basic test for AI crawlers and is at risk on Google. Lumina's JS vs No-JS tool automates this — paste a URL, get the raw word count, the rendered word count, and a word-by-word diff of what JavaScript added.
2. Google Search Console URL Inspection. Inside GSC, paste the URL in the inspection bar at the top, click "Test live URL," then "View tested page" → "HTML" tab. This is what Googlebot actually rendered, with whatever resources Google was willing to spend on your page. Compare against your local DevTools view. Differences are signal: scripts blocked by robots.txt, fonts that didn't load, third-party SDKs that timed out, lazy-loaded content the renderer didn't reach.
3. AI crawler simulation. Use curl -A "GPTBot/1.0" https://your-site.com/page to fetch as GPTBot. Inspect the response. That is exactly what OpenAI's training and retrieval pipeline saw. If the response is an empty shell, your AI visibility on this page is zero regardless of how Google ranks it. Do the same with curl -A "ClaudeBot" for Anthropic.
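If you would rather script the check than run curl by hand, here is a minimal Node 18+ sketch (built-in fetch; run with tsx or compile as ESM). The URL, the expected phrase, and the exact user-agent strings are assumptions to adjust. Note that this only shows what your server returns for a given user agent; if you serve different HTML by user agent or IP, the real bots may see something else.

```ts
// check-raw-html.ts: fetch a page the way a non-rendering crawler does and
// confirm the raw HTML already contains the content you care about.
async function main() {
  const url = process.argv[2] ?? 'https://example.com/pricing'; // hypothetical page
  const mustContain = process.argv[3] ?? 'Pricing';             // text expected in raw HTML

  for (const ua of ['GPTBot/1.0', 'ClaudeBot', 'PerplexityBot']) {
    const res = await fetch(url, { headers: { 'User-Agent': ua } });
    const html = await res.text();
    console.log(
      `${ua}: HTTP ${res.status}, contains "${mustContain}": ${html.includes(mustContain)}`,
    );
  }
}

main();
```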
Three tools in addition to the basics:
- Lumina JS vs No-JS. Free. Paste a URL, see raw vs rendered word counts, framework detection, schema visibility comparison, AI-crawler readiness verdict. Built specifically for this audit class.
- Lumina Crawler Access Checker. Tests your robots.txt against 36 search and AI crawlers including GPTBot, ClaudeBot, PerplexityBot, Google-Extended. Confirms whether the bot is even allowed to fetch — a surprisingly common own-goal.
- Google Rich Results Test. Reports whether your structured data renders correctly after JavaScript runs. Works on URLs and on pasted HTML snippets. Free, no signup, runs in seconds.
For sites with hundreds or thousands of templates, batch testing matters. Lumina's tools support bulk URL input on the JS vs No-JS tool, and the output exports as CSV for sorting by render-gap percentage.
Dynamic Rendering: Why It Is Not the Answer in 2026
Dynamic rendering is the workaround where you serve different HTML to crawlers than to users — the user gets the JavaScript app, the crawler gets a pre-rendered version from a service like Prerender.io or Rendertron. It still works. Google still allows it. Three concrete reasons it should not be your choice for any new site in 2026.
Four of the twelve guides on the SERP for "javascript seo" still teach it as a current rendering mode without a clear caveat, which is part of why this section exists. There are three reasons I do not recommend it for any new build in 2026.
Google explicitly discourages it. The official documentation now describes dynamic rendering as "a workaround and not a long-term solution for problems with JavaScript-generated content in search engines." The recommended path is server-side rendering, static rendering, or hydration. Google has not formally deprecated it — pages using dynamic rendering still get crawled — but the writing is on the wall for any new site.
It doubles your maintenance surface. You ship two versions of every page: the JavaScript SPA for users and the pre-rendered HTML for bots. Content changes have to propagate through both. Cache invalidation has to coordinate across both. Bugs in one version don't appear in the other, until they do, and the version your team uses to QA is not the version Googlebot reads. The number of "we shipped a bug to Google but not to users for three weeks" stories is large.
AI crawlers see whatever you show user agents. The user-agent split that dynamic rendering relies on for Googlebot can be configured for GPTBot, ClaudeBot, etc. — but most existing dynamic-rendering setups do not include the AI bots in their crawler list. The result: AI crawlers fall through to the user-agent path and see the empty SPA shell. You paid for a workaround that solves yesterday's problem and not today's. Migrating the site to SSR or static would have solved both at once.
If you have a dynamic-rendering setup running today and it is working, do not rip it out tomorrow. It is fine to leave in place. But the next time your team is picking a stack for a new site, do not reach for it.
Where to Start
Five moves, in order. Fewer than 90 minutes of work for a small site, two to three hours for a medium one. Start by measuring the render gap on your most important pages, then migrate public-facing routes off pure CSR, fix the silent-killer bugs that catch nobody's attention, confirm AI crawlers can actually fetch you, and validate before every deploy.
1. Measure the render gap. Run Lumina's JS vs No-JS tool on your top 5 pages: homepage, two category pages, two articles or product pages. Most sites discover that the raw HTML is missing 30-90% of the rendered content. That number is your AI visibility gap.
JS vs No-JS →
2. Migrate public-facing routes off pure CSR. If you are on Next.js, Nuxt, or SvelteKit, this is a per-route config change. If you are on plain Create React App or pure Vite + React, plan a migration to a framework that does this natively.
Framework cheat sheet →
3. Fix the silent-killer bugs. Grep for window.location = and replace with server-side 301 redirects. Grep for <div onClick on navigation and replace with real <a href>. Both are silent SEO killers that take an hour to fix.
4. Confirm AI crawlers can actually fetch you. Run Lumina's Crawler Access Checker. Verify GPTBot, ClaudeBot, PerplexityBot, Google-Extended are all allowed in robots.txt. A blocked bot doesn't render or not-render your JavaScript — it sees nothing at all.
Crawler Access →
5. Validate before every deploy. Wire JS vs No-JS into your release checklist for any page change that touches rendering. A regression where a previously SSR page accidentally goes CSR is the most expensive class of SEO bug to discover six weeks later.
JS vs No-JS →
See exactly what AI crawlers see on your site
Lumina's free JS vs No-JS tool runs the source-vs-rendered diff in seconds. Word counts, schema visibility, framework detection, AI crawler readiness verdict. One paste, no signup, bulk mode for site-wide audits.
Run the JS vs No-JS Tool →