
59% of Organisations Made a Bad AI Hire in the Past Year, New TestGorilla Research Reveals

11/05/2026

TestGorilla has released new research, The State of Hiring for AI Fluency, revealing a fundamental shift in how organisations evaluate talent: AI fluency has overtaken traditional domain expertise as the top hiring priority. 53% of hiring managers now prefer candidates with strong AI fluency over those with deep subject matter expertise. The findings also expose a significant gap between hiring ambition and hiring reality.

Despite the majority of organisations having formally defined AI fluency (72% in the UK, 71% in the US), and nearly all listing it as a formal hiring requirement, 59% across both markets still report making a bad AI hire in the past year: a candidate who spoke the language fluently in the interview but could not apply it once through the door.

Wouter Durville, CEO at TestGorilla says, “Organisations are no longer just looking for subject matter experts; they are looking for AI-augmented performers who can use emerging technology to 10x their output. But in the current market, a candidate can learn the vocabulary of AI terms, like ‘agentic workflows,’ ‘RAG,’ and ‘prompt chaining’ in a single weekend. They can describe a workflow convincingly without ever having built one.” 

The Infrastructure Paradox: Why AI Hiring Processes Are Failing 

The research identifies an “Infrastructure Paradox” at the heart of the problem. Organisations are investing in AI hiring frameworks, but those frameworks rely on the same broken proxies that have failed recruiters for decades. 

The report highlights three critical issues in modern AI hiring processes:

The awareness trap: 37% of organisations set their minimum bar at tool awareness, simply knowing a tool exists. 
The subjectivity trap: 19% of organisations leave AI assessment entirely to the individual discretion of hiring managers. Without a shared rubric, fluency becomes a subjective vibe-check that rewards the best storyteller, not the best hire. 
Confidence vs. competence: Today’s interviews are designed to observe communication, not execution. Candidates can speak fluently about putting AI workflows into practice without ever having audited an output or redesigned a workflow.

The implications for companies hiring today are immediate and measurable. A bad AI hire, one who performed fluency in the interview but could not execute in the role, can cost more than an unfilled vacancy in lost output, failed projects, and the time and money needed to rehire.

The data also reveals a sharp divergence in how these tensions play out globally. 33% of US organisations report frequent AI-driven errors, compared with just 13% in the UK. UK employers are also significantly less likely to set the bar at mere tool awareness (29% vs. 45% in the US) and show stronger internal alignment on what AI fluency actually requires for a given role, expecting candidates to use AI independently and to verify its output.

Both findings point to the same conclusion: subjective evaluation methods are no longer fit for purpose, and objective, skills-based assessment is the only reliable path to verifying AI competence in practice.

Original Article: HR News
