On March 20, 2026, Google quietly added a new entry to its crawler documentation: Google-Agent. Unlike traditional crawlers that index content in the background, Google-Agent represents something fundamentally different — AI agents that navigate the web and take actions on behalf of users.

This is not another indexing bot. This is the beginning of the agentic web.

What is Google-Agent?

Google-Agent is a user-agent string for AI agents hosted on Google infrastructure. These agents browse websites, click buttons, fill forms, and complete tasks — all directed by a human user. The first product using this technology is Project Mariner, Google's AI research prototype that operates within Chrome.

Think of it this way: when someone asks an AI assistant to book a flight, compare prices, or fill out a form, that AI needs to visit websites and interact with them. Google-Agent is the identity it uses when doing so.

Key technical details

  • User-Agent string: Google-Agent
  • IP ranges: Published in Google's user-triggered-agents.json
  • Rollout: Started March 20, 2026, with full deployment over the following weeks
  • Nature: User-triggered (not automated crawling)

How it differs from Googlebot and Google-Extended

The Google crawler ecosystem now has three distinct agents with very different purposes:

  • Googlebot — crawls and indexes pages for Google Search results.
  • Google-Extended — a robots.txt control that governs whether your content may be used for training and grounding Google's AI models, such as Gemini.
  • Google-Agent — acts in real time on behalf of a specific user, navigating and interacting with your site.

The critical difference: Google-Agent represents a real user's intent. Blocking it could mean blocking potential customers whose AI assistant is trying to interact with your business.

What this means for your website

The implications depend on your business model:

E-commerce and service businesses

If users send AI agents to compare your prices, check availability, or start a booking — you want to be accessible. Blocking Google-Agent could mean losing conversions to competitors whose sites work with AI agents.

Content publishers

AI agents might read your articles on behalf of users doing research. This is different from AI training — the agent is serving one specific user, similar to a browser extension reading content aloud.

Privacy-sensitive sites

If your site handles sensitive data (banking, healthcare), you may want to evaluate whether AI agent interactions align with your security requirements. Google-Agent can be controlled via robots.txt.

How to prepare your site

Here are three concrete steps you should take now:

1. Check your robots.txt

Make sure you're not accidentally blocking Google-Agent through a broad wildcard rule. If your robots.txt blocks all bots except Googlebot, Google-Agent might be caught in that block.

# Option A: allow Google-Agent everywhere (recommended for most sites)
User-agent: Google-Agent
Allow: /

# Option B: restrict sensitive areas only
User-agent: Google-Agent
Disallow: /admin/
Disallow: /api/

Note that these are alternatives, not one file: under RFC 9309, multiple groups for the same user agent are merged into a single rule set.
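
You can verify how your rules are interpreted before deploying them. A minimal sketch using Python's standard-library parser (the robots.txt content and URLs below are illustrative, mirroring Option B above):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block all bots by default,
# but give Google-Agent access outside sensitive paths.
ROBOTS_TXT = """\
User-agent: *
Disallow: /

User-agent: Google-Agent
Disallow: /admin/
Disallow: /api/
"""

def can_fetch(user_agent, url, robots_txt=ROBOTS_TXT):
    """Return True if `user_agent` may fetch `url` under `robots_txt`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

print(can_fetch("Google-Agent", "https://example.com/products/"))      # allowed
print(can_fetch("Google-Agent", "https://example.com/admin/settings")) # blocked
print(can_fetch("SomeOtherBot", "https://example.com/products/"))      # blocked by *
```

The specific `User-agent: Google-Agent` group takes precedence over the `*` group, which is exactly the behavior you want to confirm if your site blocks bots by default.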

Use the Lumina Robots.txt Analyzer to check your current configuration.

2. Monitor your server logs

Look for the Google-Agent user-agent string in your access logs. Volume will be low initially, but it will grow as Google rolls out Project Mariner more broadly.
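
One way to do this is a small script that tallies hits per user agent. A sketch, assuming access logs in the common combined format (the sample lines and IP addresses below are made up):

```python
import re
from collections import Counter

# Hypothetical sample lines in combined log format; in practice,
# read these from your access log file.
LOG_LINES = [
    '203.0.113.7 - - [21/Mar/2026:10:00:01 +0000] "GET /products HTTP/1.1" 200 5123 "-" "Google-Agent"',
    '198.51.100.4 - - [21/Mar/2026:10:00:02 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0"',
    '203.0.113.7 - - [21/Mar/2026:10:00:05 +0000] "GET /pricing HTTP/1.1" 200 2048 "-" "Google-Agent"',
]

# In combined log format the user agent is the last quoted field.
UA_RE = re.compile(r'"([^"]*)"\s*$')

def count_agent_hits(lines, needle="Google-Agent"):
    """Count requests whose user-agent string contains `needle`."""
    hits = Counter()
    for line in lines:
        match = UA_RE.search(line)
        if match and needle in match.group(1):
            hits[match.group(1)] += 1
    return hits

print(count_agent_hits(LOG_LINES))
```

Running something like this daily gives you a baseline, so you notice when agent traffic starts to ramp up.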

3. Review your WAF/CDN rules

If you use Cloudflare, AWS WAF, or similar services, verify that your bot protection rules don't block Google-Agent. Google publishes its IP ranges so you can allowlist them.
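
To validate that a request claiming to be Google-Agent really comes from Google, you can check its source IP against the published ranges. A sketch using only the standard library; the sample data below is hypothetical and merely mirrors the shape of Google's IP-range files — fetch the real ranges from user-triggered-agents.json:

```python
import ipaddress

# Hypothetical excerpt in the shape of Google's published IP-range
# files (a "prefixes" list of ipv4Prefix/ipv6Prefix entries).
# 192.0.2.0/24 and 2001:db8::/32 are documentation ranges, not
# real Google ranges.
SAMPLE_RANGES = {
    "prefixes": [
        {"ipv4Prefix": "192.0.2.0/24"},
        {"ipv6Prefix": "2001:db8::/32"},
    ]
}

def is_in_ranges(ip, ranges=SAMPLE_RANGES):
    """Return True if `ip` falls inside any published prefix."""
    addr = ipaddress.ip_address(ip)
    for entry in ranges["prefixes"]:
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if addr in ipaddress.ip_network(prefix):
            return True
    return False

print(is_in_ranges("192.0.2.15"))   # inside the sample range
print(is_in_ranges("203.0.113.1"))  # outside
```

The same check can be translated into an allowlist rule in your WAF or CDN, so bot protection never challenges verified agent traffic.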

The bigger picture: the agentic web

Google-Agent is not an isolated development. OpenAI has ChatGPT-User and OAI-SearchBot. Anthropic has Claude-User. The pattern is clear: AI companies are building agents that browse the web on behalf of users.

This creates a new category of web traffic that sits between traditional bots and human visitors. Websites that are optimized for these interactions will have an advantage — both in AI-powered search results and in direct user tasks.

This is exactly what Generative Engine Optimization (GEO) addresses: preparing your web presence not just for search engines, but for AI systems that actively interact with your content.

Check your AI crawler access

Lumina checks Google-Agent and 23 other AI crawlers. See which bots can access your site — in one click.

Get Lumina — Free