Why Every Hiring Process Now Needs an AI Proficiency Assessment

75% of knowledge workers use AI daily. Yet most hiring processes have no way to measure AI skills in candidates. Here's why self-reported proficiency is noise — and what a real AI proficiency assessment looks like.

Here's a number that should concern every hiring leader: 75% of knowledge workers are already using AI at work. Most started in the last six months.

Adoption isn't the problem. The problem is that most companies have no AI proficiency assessment in their hiring process — no structured way to measure whether candidates can actually use AI well.

The gap between adoption and proficiency

AI at work has moved fast. Gallup data show that U.S. employee AI usage doubled in just two years — from 21% to 40%. McKinsey reports 65% of organizations now use generative AI regularly. ChatGPT alone shows up in 76% of offices worldwide.

But here's what's happening beneath the surface: AI proficiency varies enormously across the workforce. From beginners who treat ChatGPT like a search engine, to advanced users who've restructured entire workflows around AI-assisted analysis — the range is vast. Most hiring processes can't tell the difference.

The performance gap is real. MIT Sloan research shows that workers who accept AI output uncritically actually produce worse results than those who don't use AI at all. The tool isn't the differentiator. The skill of the person using it is.

Self-reported AI skills are noise

Microsoft and LinkedIn's 2024 Work Trend Index reported a 142x increase in members adding AI skills to their profiles. That makes "Experience with AI tools" on a CV approximately as informative as "Proficient in Microsoft Office" was in 2005.

When everyone claims the skill, the signal disappears. And unlike coding — where GitHub contributions or technical interviews provide observable evidence — AI readiness mostly shows up in internal tasks: sharper prompts, smarter workflows, higher-quality deliverables. It's invisible to a recruiter scanning a resume.

This is the core measurement problem. HR leaders know AI knowledge matters. Half of all employers cite lack of AI capabilities as their top barrier to AI adoption. But they have no reliable way to distinguish candidates who genuinely know how to work with AI systems from those who've watched a few YouTube tutorials.

What AI proficiency actually looks like

Research from UNESCO, the OECD, and multiple academic frameworks converges on a clear picture. AI proficiency isn't a single competency. It's a cluster of distinct capabilities spanning different functions — from how someone plans an AI-assisted task to how they critically evaluate what the AI gives back.

Think about the real-world use cases that separate a great AI user from a mediocre one. It's not whether they can write a prompt. It's whether they understand prompt engineering well enough to know what to ask for — and what to do with the answer. Can they tell when the LLM is confidently wrong? Do they know when to stop using AI entirely? Can they take AI-driven results and turn them into something a client or manager would actually trust?

These are genuinely different skills. Someone can be excellent at getting useful results from an LLM but terrible at spotting when those results contain fabricated data. Someone else might have strong instincts about AI ethics and privacy but struggle to integrate AI-powered automation into a productive workflow.

The UK AI Skills Framework, the SFIA framework, and Elsevier's AI Literacy Competency Model all validate that AI readiness is multi-dimensional. Any serious AI skills assessment needs to capture these different facets independently — not collapse them into a single "AI score."

How to measure AI skills — and why most approaches fail

Most organizations are responding to the AI skills gap in one of three ways. All three are inadequate.

Approach 1: "Add it to the job description." Listing "familiarity with AI systems" in a job posting is not assessment. It's a filter that filters nothing. Every candidate will claim it. You'll get the same applicant pool, just with an extra line on their CV.

Approach 2: "Ask about it in the interview." "Are you comfortable using AI?" is not a diagnostic question. It reveals self-confidence, not skill. The worst performers — the ones who accept hallucinated LLM output without checking — are often the most confident about their abilities.

Approach 3: "Use a knowledge quiz." Multiple-choice questions about artificial intelligence terminology tell you whether someone has read an article about machine learning. They tell you nothing about whether that person can apply practical problem solving with AI, catch a fabricated citation, navigate a privacy boundary, or optimize an AI-driven workflow that functions across a team.

The only way to benchmark AI readiness is to test what people actually do when they work with AI. That means a hands-on assessment test built around realistic scenarios: practical decisions, observable reasoning, real-world use cases. A clear methodology grounded in what people DO — not what they claim to know. Everything else is theatre.

Why this matters now, not later

The World Economic Forum projects that 39% of workers' core skills will change by 2030. 86% of companies expect AI to transform how they operate. Two-thirds plan to hire specifically for these capabilities.

But most hiring processes still screen for the same things they screened for in 2019. Interviews might include a vague question about whether candidates have used ChatGPT or other AI tools. That's not assessment. That's a checkbox.

Meanwhile, IBM reports that AI spending will exceed $550 billion while the AI talent gap sits at roughly 50%. Only 35% of employees have received any upskilling or AI training in the past year, despite 75% of companies rolling out AI-powered initiatives. The metrics are damning: massive investment in AI systems, minimal infrastructure for measuring the human AI capabilities needed to execute on it.

Data from organizations that have formalized AI readiness measurement tells a consistent story: candidates who score well on scenario-based assessments ramp faster, adapt more readily to new AI systems, and use AI and automation more safely. Those hired on self-reported AI readiness alone rarely deliver.

Organizations that figure out how to reliably measure AI proficiency will build a genuine competitive advantage. Those that don't will keep hiring based on assumptions and hoping for the best.

What to look for in an AI proficiency assessment

If you're evaluating how to add measurement to your hiring process, here's what the research suggests:

  • Scenario-based, not knowledge-based. You're measuring practical application — what people actually DO with AI in real-world use cases — not abstract AI knowledge.

  • Multiple dimensions. AI proficiency isn't a single score. Someone can excel at prompt engineering but be poor at critically evaluating what LLMs produce.

  • Level-appropriate benchmarks. Beginners and senior leaders need different standards. A junior coordinator and a VP of Operations need vastly different AI capabilities. The assessment test should match the role.

  • Scientifically grounded. Built on validated frameworks and a clear methodology — not guesswork.

  • Tool-agnostic. AI systems change every few months. The assessment should measure durable technical skills — reasoning, judgment, ethics — not familiarity with a specific product.

  • Actionable output. The result should tell you what the candidate CAN DO at work across different functions — not just where they scored on a scale.

AI skill gaps are real and growing, and traditional hiring processes can't close them. The question for HR leaders isn't whether to measure AI proficiency. It's how soon you can start — and whether your hiring and upskilling initiatives are built on actual evidence rather than self-reported AI readiness.

Frequently Asked Questions

What is an AI proficiency assessment?
An AI proficiency assessment is a structured evaluation that measures how effectively someone can work with AI systems in real workplace contexts. Unlike a knowledge quiz, it tests practical application: can the person plan AI-assisted tasks, critically evaluate LLM outputs, apply prompt engineering, and navigate ethical boundaries? A well-designed assessment covers multiple dimensions of AI readiness — not just familiarity with tools.

How is AI proficiency different from AI literacy?
AI literacy typically refers to foundational understanding — knowing what artificial intelligence is, how it broadly works, and its societal implications. AI proficiency goes further: it measures whether someone can perform at a high level when using AI systems to complete real work. Think of it as the difference between knowing the rules of chess and being able to win a game.

Can't you assess AI skills in an interview?
Not reliably. Interview questions about AI comfort levels measure confidence, not competency. The problem is that low-skill AI users — those most likely to uncritically accept hallucinated outputs — are often the most confident about their abilities. A structured AI skills assessment test removes this bias by focusing on what candidates actually do in real-world use cases, not what they say.

What should an AI proficiency assessment include?
The research points to several non-negotiable elements: hands-on, scenario-based tasks (not multiple-choice questions), coverage across multiple dimensions including prompt engineering, critical evaluation, ethical reasoning, and workflow integration, plus level-appropriate benchmarks for different roles and a scientifically validated methodology. The output should describe what the candidate can actually do — not just where they ranked on a scale.

Do beginners perform poorly on AI proficiency assessments?
Not necessarily. Beginners who approach AI output with appropriate skepticism can outperform experienced users who've developed overconfidence in AI systems. The key variable isn't how long someone has used AI — it's how critically they engage with it. A well-designed assessment test will capture this, which is exactly why scenario-based methodology outperforms self-reporting.

Is AI proficiency the same as technical skills?
No. Technical skills relate to engineering, coding, or data science. AI proficiency covers how well anyone across any function — marketing, HR, operations, finance — can collaborate with AI systems to produce high-quality work. This distinction matters when designing an AI skills assessment for non-technical roles, which represent the vast majority of AI use cases in most organizations.

Author

Auria Heanley

Bryq is composed of a diverse team of HR experts, including I-O psychologists, data scientists, and seasoned HR professionals, all united by a shared passion for soft skills.

Book a demo

Ready to see Bryq in action?

Start hiring based on real data.


TESTIMONIALS

Why our customers love Bryq

“Bryq expertly steered us through a transformative journey, helping us align our core cultural pillars and guiding principles with the essential traits necessary to attract and retain the best talent.”

Nick Jacks

Group Director of Talent

“Bryq streamlines the interview process by matching candidates to what matters, and gives me all the insight I need to evaluate them properly.”

Sigrid Shun

VP, HR Business Partner Lead

“Maybe my favourite part of using Bryq is helping uncover unique people we might not have even considered before...and watching them thrive.”

Rob Dougherty

SVP of Global Talent
