AI Salary Accuracy Report: What ChatGPT Gets Wrong About UK Pay
"How much does [company] pay for [role]?" is the #1 candidate query to AI, accounting for 34% of all employer-related AI searches.
We tested how accurately AI answers this question — across 200 UK companies, 5 AI models, and multiple seniority levels. The results should concern every employer.
Headline findings
| Metric | Result |
|---|---|
| Companies tested | 200 |
| AI models | ChatGPT, Perplexity, Google AI, Claude, Copilot |
| Salary comparisons made | 3,000+ |
| Average error | £18,400 |
| Underestimates | 62% (median: £18K below actual) |
| Overestimates | 16% |
| Accurate (within ±£5K) | 22% |
The typical AI salary answer is £18,400 below reality. That means the majority of candidates researching employers via AI are being told they'll earn significantly less than they actually would.
Model-by-model accuracy
Not all AI models are equally (in)accurate:
ChatGPT (GPT-4o)
- Accuracy rate: 18% within ±£5K
- Average error: £21,200
- Bias: Strong underestimate (68% of responses were low)
- Style: Most confident — gives specific ranges with no disclaimers
- Problem: Confidence without accuracy is the worst combination. Candidates trust the number because it sounds authoritative.
Perplexity
- Accuracy rate: 26% within ±£5K
- Average error: £15,800
- Bias: Moderate underestimate (58% low)
- Style: Cites sources, which adds credibility — but sources are often outdated
- Problem: Transparent sourcing of stale data. The 2023 salary survey it cites was accurate then, not now.
Google AI Overviews
- Accuracy rate: 24% within ±£5K
- Average error: £16,400
- Bias: Mixed (52% under, 22% over)
- Style: Hedges with "varies by experience and location"
- Problem: Pulls from Google's job listing data, which is better but still has gaps in sectors with low listing transparency.
Claude
- Accuracy rate: 22% within ±£5K
- Average error: £17,600
- Bias: Tends to give wider ranges, which sometimes captures actual salary
- Style: More cautious, often adds caveats about data limitations
- Problem: Wider ranges are more honest but less useful to candidates making decisions.
Microsoft Copilot
- Accuracy rate: 20% within ±£5K
- Average error: £19,400
- Bias: Strong underestimate (65% low)
- Style: Often pulls from LinkedIn data, which should help accuracy but doesn't do so consistently
- Problem: LinkedIn salary data is self-reported and often outdated.
Industry breakdown
| Industry | Avg error | % Accurate | Direction |
|---|---|---|---|
| Technology | £14,200 | 28% | Mixed (slightly under) |
| Financial services | £16,800 | 24% | Under by 15-20% typically |
| Healthcare (NHS) | £8,400 | 38% | Most accurate (public pay scales help) |
| Professional services | £22,600 | 16% | Significantly under |
| Retail | £12,200 | 30% | Under for management, accurate for hourly |
| Manufacturing | £24,800 | 12% | Severely under — AI barely knows these salaries |
| Construction | £26,400 | 10% | Worst accuracy — AI defaults to outdated averages |
| Hospitality | £9,800 | 34% | Relatively accurate (less variation in pay) |
Key insight: Industries with public pay data (NHS, civil service) or high online salary discussion (tech) have better accuracy. Industries where pay is less discussed online (manufacturing, construction, professional services) have severe accuracy problems.
Seniority breakdown
| Level | Avg error | Direction |
|---|---|---|
| Graduate / Entry | £4,200 | Slight underestimate |
| Mid-level (3-5 years) | £12,800 | Moderate underestimate |
| Senior (5-10 years) | £22,400 | Significant underestimate |
| Lead / Principal | £28,600 | Severe underestimate |
| Director+ | £35,000+ | Often wildly inaccurate |
The pattern is clear: AI accuracy degrades as seniority increases. Entry-level salaries are relatively well-documented online. Senior and leadership compensation is more opaque — and AI's guesses get worse.
This is particularly damaging for employers trying to hire senior talent. The candidates with the most options — and the least tolerance for wasted time — are the ones getting the most inaccurate salary information.
Why AI gets salaries wrong
1. Source data is blocked
The platforms that hold actual salary data — traditional review sites, salary aggregators, professional networks — block AI crawlers. Their data is proprietary and commercially valuable. The unintended consequence: AI can't access the best salary data that exists.
2. Training data is stale
ChatGPT's knowledge cutoff means salary data from its training set reflects pre-cutoff compensation. Even with web search enabled, models anchor to training data and adjust insufficiently.
3. Geographic confusion
AI models frequently mix US and UK salary data. A "senior engineer" query might partially reference US figures (typically 40-60% higher than UK equivalents) and blend them into the estimate.
4. Averaging across the wrong dataset
When AI gives a salary range, it's often averaging across all mentions of that role on the open web — including freelance rates, part-time roles, different countries, and outdated figures. The resulting "average" bears little resemblance to current full-time UK compensation.
5. No access to first-party data
Most employers don't publish salary ranges. Without first-party data, AI has no authoritative source to anchor its estimate. It's forced to triangulate from partial, outdated, and often inaccurate signals.
The fix: publish your data
The relationship between salary data publication and AI accuracy is stark:
| Data published | AI accuracy (within ±£5K) | Avg error |
|---|---|---|
| Full salary ranges on listings + llms.txt | 84% | £3,200 |
| Ranges on some listings | 52% | £9,400 |
| Ranges in llms.txt only | 48% | £11,200 |
| No published data | 22% | £18,400 |
Companies that publish comprehensive salary data across both job listings and llms.txt see their AI accuracy jump from 22% to 84%. The error drops from £18,400 to £3,200. That's the difference between candidates being repelled by wrong numbers and attracted by accurate ones.
The cost calculation
For a company with 50 open roles:
- If AI underestimates salaries by £18K on average
- And 15% of candidates use AI as their primary research tool
- And 10% of those candidates are deterred by the low estimates
- Then, depending on applicant volume per role, that's potentially hundreds of qualified candidates lost per year
The cost of each lost candidate: Recruiter time to source a replacement (£2,000-£5,000), extended vacancy cost (£500-£1,000 per week), and the opportunity cost of potentially losing your best candidate to a competitor that AI happens to describe more accurately.
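The calculation above can be sketched as a back-of-envelope model. The applicant volume per role is an assumption we've added for illustration (it isn't stated in the study); the 15% and 10% shares come from the figures above.

```python
# Back-of-envelope estimate of candidates lost to inaccurate AI salary answers.
# applicants_per_role is illustrative; ai_primary_share and deterred_share
# follow the figures in the text (15% research via AI, 10% of those deterred).

def candidates_deterred(open_roles, applicants_per_role=60,
                        ai_primary_share=0.15, deterred_share=0.10):
    """Estimate qualified candidates deterred by low AI salary estimates."""
    total_applicants = open_roles * applicants_per_role
    ai_researchers = total_applicants * ai_primary_share
    return ai_researchers * deterred_share

lost = candidates_deterred(open_roles=50)
print(round(lost))  # → 45
```

With 60 applicants per role the model gives roughly 45 deterred candidates; at higher applicant volumes, or with roles refilled more than once a year, the figure climbs into the hundreds.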
Recommendations
For employers:
- Publish salary ranges — on job listings, in llms.txt, on your careers page. The more places, the more AI models pick it up.
- Include total compensation — pension, bonus, equity, benefits value. AI responds better to comprehensive data.
- Update annually — stale published ranges become AI's stale answers.
- Monitor — check what AI actually says about your compensation every quarter.
For candidates:
- Don't trust AI salary estimates — especially for senior roles and industries outside tech.
- Cross-reference — check multiple AI models and published company data.
- Ask in interviews — AI salary data is too unreliable to use as a basis for expectations.
Get your salary accuracy score
Run a free audit and see exactly what AI tells candidates about your compensation — and how far off it is.
30 seconds. No signup.
→ Run your free AI employer brand audit
Sources: OpenRole salary accuracy study (Feb-Mar 2026), 3,000+ salary comparisons across 200 UK employers and 5 AI platforms. Verification: ONS Annual Survey of Hours and Earnings 2025, published job listing data, employer-verified ranges.