AI is hallucinating your salary data — here's proof
"How much does [company] pay for a senior engineer?" is one of the most common questions candidates ask AI. We tested what ChatGPT, Google AI, and Perplexity actually answer — and compared it to real compensation data from job listings and verified employer sources.
The results are worse than we expected.
The methodology
We asked three AI models the same salary question for 200 UK companies across technology, finance, healthcare, and professional services. We then compared the AI's estimates against:
- Published salary ranges on active job listings
- Employer-verified data (where available)
- ONS and industry salary surveys for the role and sector
The headline numbers
- 78% of salary estimates were inaccurate by more than £5,000
- 62% underestimated actual salaries, most by £15,000-£25,000
- 16% overestimated, often by £10,000 or more
- 22% were within an acceptable ±£5,000 range
- Average deviation: £18,400 from actual compensation
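To make clear what these headline figures measure, here is a minimal sketch of the arithmetic, with invented (AI estimate, actual salary) pairs standing in for the audit data:

```python
# Illustrative only: sample pairs are invented, not audit data.
# Each pair is (AI's salary estimate, verified actual salary) in GBP.
samples = [
    (52_000, 75_000),
    (48_000, 70_000),
    (95_000, 82_000),
    (61_000, 63_000),
]

THRESHOLD = 5_000  # the "acceptable" band: within +/- 5,000 GBP

deviations = [abs(est - actual) for est, actual in samples]
inaccurate = sum(d > THRESHOLD for d in deviations)
under = sum(est < actual - THRESHOLD for est, actual in samples)
over = sum(est > actual + THRESHOLD for est, actual in samples)

print(f"Off by more than 5K: {inaccurate / len(samples):.0%}")
print(f"Underestimated:      {under / len(samples):.0%}")
print(f"Overestimated:       {over / len(samples):.0%}")
print(f"Average deviation:   GBP {sum(deviations) / len(deviations):,.0f}")
```

The same three buckets sum to the totals above: estimates off by more than £5,000 split into under- and overestimates, and the remainder counts as accurate.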
Why AI gets salaries wrong
The root cause is data access. The platforms that hold real salary data (salary comparison sites, job boards, LinkedIn, Levels.fyi) all block AI crawlers via robots.txt. With those sources off-limits, AI models are forced to estimate from:
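For illustration, the blocking directives look like this. GPTBot, Google-Extended, and PerplexityBot are the published crawler tokens for OpenAI, Google's AI training opt-out, and Perplexity; the exact rules vary by platform:

```
# Illustrative robots.txt of the kind salary platforms publish
# (actual rules differ site by site)

# OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training opt-out token
User-agent: Google-Extended
Disallow: /

# Perplexity's crawler
User-agent: PerplexityBot
Disallow: /
```

A crawler that honours these rules never sees the pages where real ranges live, so the model's answer is built from whatever secondhand mentions remain in its training data.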
Outdated training data
ChatGPT's training data has a knowledge cutoff. Salary information from 2023-2024 training data doesn't reflect 2026 market rates, especially in fast-moving sectors like technology.
Reddit and forum estimates
Without access to salary platforms, AI falls back on Reddit discussions, blog posts, and forum threads — where salary mentions are anecdotal, often outdated, and frequently inaccurate.
Geographic confusion
AI models often mix US and UK salary data. A "senior engineer" query might pull US figures (typically higher) and present them as UK salaries, or vice versa.
The real cost to employers
When AI underestimates your salaries — which happens 62% of the time — candidates conclude you don't pay competitively. They move on to companies where AI reports higher compensation. You never see the application. You never get the chance to correct the record.
When AI overestimates, you attract candidates with inflated expectations. The salary conversation in the offer stage becomes adversarial. Either way, AI salary hallucinations cost you candidates and time.
Example: A mid-size London fintech we audited pays senior engineers £85K-£110K. ChatGPT estimated £55K-£68K. That £30K understatement likely deterred dozens of qualified candidates who assumed the company couldn't compete on compensation.
Model-by-model comparison
Not all AI models are equally inaccurate:
ChatGPT: Most confident, least accurate. Often gives specific ranges that are £20K+ off. Cites "data from various sources" without naming them.
Google AI: More cautious, adds disclaimers. Pulls from Google's own job listing data but still misses employer-specific ranges. Average deviation: £14K.
Perplexity: Most transparent about sources — cites specific pages. But those sources are often outdated job listings or Reddit threads. Average deviation: £16K.
The fix: publish your own data
The solution is straightforward: give AI accurate salary data to cite. Companies that publish salary ranges in machine-readable formats — JSON-LD on job listings, llms.txt files, structured careers pages — see significantly more accurate AI representations.
In our audit, companies with published salary data had an average AI salary deviation of just £3,200 — compared to £18,400 for those without.
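As a sketch of what "machine-readable" means here (company name and figures invented), a listing's salary range can be expressed with schema.org JobPosting markup, the JSON-LD vocabulary search engines already parse:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "JobPosting",
  "title": "Senior Software Engineer",
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Example Fintech Ltd"
  },
  "jobLocation": {
    "@type": "Place",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "London",
      "addressCountry": "GB"
    }
  },
  "baseSalary": {
    "@type": "MonetaryAmount",
    "currency": "GBP",
    "value": {
      "@type": "QuantitativeValue",
      "minValue": 85000,
      "maxValue": 110000,
      "unitText": "YEAR"
    }
  },
  "datePosted": "2026-02-01",
  "employmentType": "FULL_TIME"
}
</script>
```

The explicit currency and per-year unit matter: they remove exactly the US/UK and hourly/annual ambiguity that AI models otherwise have to guess at.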
Is AI getting your salaries right?
Check what AI tells candidates about your compensation. Free, instant.
Run your free audit

Sources
- OpenRole audit data — 200 UK employers across 4 sectors (Feb 2026)
- ONS Annual Survey of Hours and Earnings (2025)
- Major salary platform and job board robots.txt files — verified Feb 2026
- Profound — LLM citation analysis (2025)