Company Teardown · 17 March 2026 · 7 min read

What AI Gets Wrong About Working at Google UK

You'd expect Google to have perfect AI employer visibility. After all, they literally build one of the AI models we test. If any company should have their AI employer narrative locked down, it's Google.

They don't.


The irony

Google's AI Overviews — the feature that summarises information directly in search results — gets Google's own employer brand partially wrong. When we asked Google's AI about working at Google UK, the response included outdated perks, incorrect location details, and a salary range that mixed US and UK data.

If Google can't control what AI says about Google, what chance does your company have?


What we tested

Three queries across ChatGPT, Perplexity, and Google AI Overviews:

  1. "What's it like to work at Google in London?"
  2. "What does Google UK pay software engineers?"
  3. "What benefits does Google UK offer?"

Salary: the transatlantic confusion

This is the most consistent problem across all AI models when describing FAANG employers.

ChatGPT's estimate for a Google L5 engineer in London: £120,000–£180,000 total compensation.

More accurate range for L5 in London (2025/26 data): £85,000–£110,000 base + £20,000–£40,000 equity + bonus. Total: £115,000–£165,000.

The ranges overlap, but ChatGPT's figure was skewed by US L5 compensation data ($200K–$300K+), which it failed to fully adjust for the UK market. Its implied base of £120K+ is above what most London L5s actually earn in base salary.

Perplexity was more accurate, citing levels.fyi data and explicitly noting the UK vs US difference. But it quoted 2024 figures that didn't reflect recent compensation changes.

Google AI — amusingly — gave the most generic answer: "Google salaries are competitive and vary by role and location." Its own AI declined to give specific numbers about its own company.


The "Google perks" that no longer exist

All three AI models mentioned perks that Google has modified or discontinued at various points:

  • "Free gourmet meals three times a day" — Still offered in major offices, but the scope has changed at different locations over time
  • "20% time for personal projects" — This iconic policy has evolved significantly. It's not the same programme it was in 2008.
  • "Nap pods everywhere" — Not universal across all offices
  • "Free laundry service" — Available in some US offices, not standard in London

AI presents the Silicon Valley myth version of Google perks and applies it globally. A candidate joining Google London expecting the full Bay Area perk package would have to adjust their expectations on day one.


The culture gap

AI's description of Google's culture had a notable lag:

  • "Move fast and ship products" — More accurate for 2015 than 2026. Google's culture has evolved considerably, particularly post-layoffs.
  • "Flat hierarchy where anyone can pitch ideas to leadership" — The reality at 180,000+ employees is more nuanced than this startup-era description suggests.
  • "Best work-life balance in tech" — Contested by recent internal surveys and online discussions.

The AI narrative is built from the era when Google was the undisputed "best place to work." The current reality — still very good, but more complex — doesn't match the AI mythology.


What Google's visibility score looks like

Despite being one of the world's most documented employers, Google UK's AI Visibility Score has specific weaknesses:

| Dimension | Score | Notes |
|---|---|---|
| Discoverability | 9/10 | Massive online presence |
| Salary accuracy | 5/10 | US/UK confusion, outdated data |
| Cultural representation | 6/10 | Oversimplified, 5+ year lag |
| Benefits completeness | 6/10 | Mythologised rather than accurate |
| Cross-platform consistency | 4/10 | Each model tells a different story |

Overall: 60/100 — Good by absolute standards (average UK employer: 34), but poor relative to what Google could achieve with minimal effort.


Why even Google gets it wrong

Google's AI employer brand problems illustrate systemic issues:

1. Volume drowns signal

Google is mentioned in millions of web pages. AI models can't distinguish between a 2023 Reddit rant and Google's current careers page. The sheer volume of content — much of it outdated or US-centric — creates noise that AI can't filter.

2. No llms.txt

As of March 2026, google.com/llms.txt doesn't return an employer-specific file. Google has an AI-focused llms.txt for their developer platform, but nothing telling AI models about Google as an employer — culture, UK salaries, London-specific benefits.
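What would an employer-specific llms.txt look like? Following the llms.txt proposal's format (an H1 title, a blockquote summary, then sections of annotated links), a minimal sketch might be the following. All URLs, section names, and documents here are hypothetical placeholders, not real Google resources:

```markdown
# Google UK Careers

> Hypothetical employer-brand llms.txt. Points AI models at current,
> UK-specific information about working at Google's London offices.
> Every URL below is a placeholder for illustration.

## Compensation
- [UK engineering salary bands, 2025/26](https://example.com/uk-salary-bands.md): Base, equity, and bonus ranges by level for London, kept separate from US data

## Benefits
- [London office benefits](https://example.com/london-benefits.md): Current UK perks, explicitly noting which US perks do not apply

## Culture
- [Working at Google London](https://example.com/london-culture.md): Team structure, work-life balance, and how the culture has changed since 2023
```

The point is not the specific sections but the separation: UK data lives at UK-labelled URLs, so a model has no reason to blend it with the US corpus.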

3. The US/UK data split

Google's US employer brand content vastly outweighs its UK content. AI models trained on this imbalanced corpus default to US assumptions about compensation, perks, and culture.

4. Mythology vs reality

Google's early-stage culture became internet legend. "Free food, nap pods, 20% time" is such a strong cultural meme that AI treats it as current fact even when reality has evolved.


Lessons for every employer

If Google — with infinite engineering resources and control over one of the AI models — can't get their AI employer brand right, the lesson is clear:

  1. AI employer visibility doesn't fix itself. Even massive online presence doesn't guarantee accuracy.
  2. Machine-readable data beats volume. A small company with llms.txt and structured data will have better AI accuracy than a global brand without it.
  3. Geographic specificity matters. If you have UK and US operations, AI will blend them unless you explicitly separate the data.
  4. Mythology persists. Whatever your company was known for 5 years ago is what AI will say today — unless you actively update the narrative.
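Lesson 2's "structured data" has a concrete form: schema.org JobPosting markup, which search engines and AI crawlers already parse. A hedged sketch of what geographically unambiguous salary data looks like (the company name and figures are illustrative, not real postings):

```json
{
  "@context": "https://schema.org/",
  "@type": "JobPosting",
  "title": "Senior Software Engineer",
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Example Ltd"
  },
  "jobLocation": {
    "@type": "Place",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "London",
      "addressCountry": "GB"
    }
  },
  "baseSalary": {
    "@type": "MonetaryAmount",
    "currency": "GBP",
    "value": {
      "@type": "QuantitativeValue",
      "minValue": 85000,
      "maxValue": 110000,
      "unitText": "YEAR"
    }
  },
  "datePosted": "2026-03-01",
  "employmentType": "FULL_TIME"
}
```

The `addressCountry: "GB"` and `currency: "GPB"`-style fields are exactly what stops an AI model from defaulting to US assumptions: the currency, country, and salary range are declared rather than inferred. (Note: the currency code is `GBP`.)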

See what AI says about you

If Google's AI employer brand has gaps, yours almost certainly does too. Run a free audit and find out.

30 seconds. No signup.

→ Run your free AI employer brand audit


Sources: OpenRole audit data (March 2026), levels.fyi UK compensation data, ChatGPT-4o, Perplexity, Google AI Overviews.