Measure Real AI Proficiency

Bryq's AI Proficiency Assessment measures how well people actually work with AI through scenario-based tasks, not knowledge quizzes. Five dimensions, each scored 0–100: task strategy, prompting, critical evaluation, ethical use, and workflow integration.

Three proficiency levels matched to role requirements, whether you're assessing candidates or current employees.

5 Dimensions of AI Proficiency

Our assessment evaluates the complete lifecycle of working with AI, from knowing when to use it, through getting results, to delivering work you can stand behind.

  1. AI Task Strategy

Knowing when to use AI — and when not to. Can they assess a task and decide what to delegate versus keep human?

  2. Prompting & Interaction Quality

Getting results from AI tools. Can they write effective prompts and iterate when the first attempt doesn't land?

  3. Critical Evaluation & Validation

Spotting what AI gets wrong. Can they catch errors, hallucinations, and gaps before those reach real work?

  4. Ethical & Responsible Use

Using AI without creating risk. Do they understand privacy, bias, IP, and regulatory boundaries?

  5. Workflow Integration & Output Quality

Turning AI output into professional work. Can they edit, synthesise, and combine AI content with original thinking?

Built for Talent Decisions. Backed by Science.

Bryq test library card for AI Proficiency

Step 1: Choose the proficiency level

Pick the proficiency level the role requires: Aware, Functional, or Advanced. Each level has its own scenarios matched to the AI competency expected, whether you're screening a new hire or benchmarking a current employee.

Step 2: Candidates complete realistic scenarios

No trivia questions. No textbook definitions. People work through realistic business scenarios that test how they actually approach AI-assisted tasks. Takes about 15 minutes.

A Bryq assessment question showing a realistic business scenario about AI model confidence scores
Icons representing Bryq assessment features including communication, evaluation, security, and collaboration
Bryq dimension score card for Critical Evaluation
Bryq dimension score card for Prompting and Interaction Quality

Step 3: Get dimension-level insights

Get a detailed proficiency profile with scores across all five dimensions, each on a 0–100 scale. See exactly where each person excels and where the gaps are. No pass/fail binary. Use it to make a hiring call, plan targeted upskilling, or both.

Not Another AI Quiz

The market is full of AI assessments. Most test what candidates know about AI. We measure what they can actually do with it.

COMPETITORS

Developer-focused tools

Test ML engineering, model deployment, and coding skills. Great for hiring data scientists. Irrelevant for 95% of your workforce.

AI literacy quizzes

Test whether someone can define "hallucination." Knowing the vocabulary doesn't mean they can do the work.

Tool-specific assessments

Test proficiency with one product. That product updates next quarter. The skills become obsolete.

BRYQ

Bryq AI Proficiency Assessment

Measures how people actually work with AI in realistic scenarios. Tool-agnostic. Role-universal. Built on peer-reviewed research.

Three overlapping Bryq assessment question cards illustrating realistic business scenarios

Built on Science, Not Guesswork

Our competency framework synthesises six peer-reviewed research sources spanning international government frameworks, academic competency models, and policy-grade workforce research.


Each dimension is grounded in established AI literacy and digital skills scholarship — not invented in a product sprint.

The result: a measurement instrument built on the most rigorous AI proficiency research available, validated through established psychometric methods.

Diagram showing the six research sources behind Bryq's competency framework

Match the Assessment to the Role

Works for pre-hire screening, internal benchmarking, and annual AI readiness audits.

One green star indicating the Aware proficiency level.

Aware

For roles where basic AI awareness is sufficient. Can the candidate use AI tools for simple tasks, understand their limitations, and avoid common pitfalls?

Two green stars indicating the Functional proficiency level.

Functional

For roles where regular AI use is expected. Can the candidate effectively prompt AI tools, evaluate outputs critically, and integrate AI into standard workflows?

Three green stars indicating the Advanced proficiency level.

Advanced

For roles requiring sophisticated AI application. Can the candidate design complex AI workflows, evaluate AI opportunities strategically, and guide others in responsible adoption?

Ready to see Bryq in action?

Start making talent decisions based on real data.

FAQ

Find answers to the most frequently asked questions about Bryq

Is this assessment specific to any AI tool like ChatGPT or Copilot?

No. The assessment is completely tool-agnostic. It measures transferable AI proficiency skills — how to think about AI tasks, evaluate AI outputs, and integrate AI into work — that apply regardless of which specific tool a candidate uses. AI tools change rapidly; the underlying skills we measure are durable.

Does it only work for technical roles or developers?

No. The assessment is designed for any role where AI proficiency matters — marketing, finance, operations, customer success, HR, and more. You select the appropriate proficiency level for the role, and the scenarios adjust accordingly.

What are the five dimensions you measure?

We assess AI Task Strategy (planning and delegation), Prompting & Interaction Quality (effective AI communication), Critical Evaluation & Validation (quality control of AI outputs), Ethical & Responsible Use (risk and compliance awareness), and Workflow Integration & Output Quality (turning AI assistance into professional deliverables).

How long does the assessment take?

Approximately 15 minutes. It's designed to be thorough but respectful of candidate time — much shorter than most technical assessments.

Is there a pass/fail score?

No. Candidates receive scores across each dimension on a 0–100 scale, providing a proficiency profile rather than a binary outcome. This helps you understand where someone excels and where they might need development.

What research is the assessment based on?

The framework synthesises six peer-reviewed research sources spanning international government AI competency frameworks, academic literacy models, and policy-grade workforce research. Each dimension is grounded in established competency models and validated through psychometric methods.

Can AI proficiency be assessed for current employees, not just job candidates?

Yes. AI proficiency is a skill set that applies regardless of whether someone is being evaluated for a new role or already employed. The same five dimensions (task strategy, prompting quality, critical evaluation, ethical use, and workflow integration) apply to any knowledge worker. Organisations assess current employees to identify AI upskilling gaps, inform L&D investments, support internal mobility, and meet requirements like EU AI Act Article 4.

How does this relate to the EU AI Act?

Article 4 of the EU AI Act requires organisations deploying AI to ensure adequate AI literacy among relevant staff. This assessment provides a validated, documented way to measure and demonstrate that literacy — supporting compliance efforts.

Will the assessment become outdated as AI tools evolve?

Because we measure underlying proficiency skills (critical evaluation, ethical reasoning, task strategy) rather than knowledge of specific tools or interfaces, the assessment remains relevant as tools change. We continuously update scenarios to reflect current workplace practices while keeping the core framework stable.

How is scoring done?

Each of the five dimensions is scored independently on a 0–100 scale. Scores are level-specific: a score of 85 at the Functional level means excellent performance on Functional-level tasks. This gives you actionable insight into where a candidate stands relative to what the role requires.