5 Dimensions of AI Proficiency
Our assessment evaluates the complete lifecycle of working with AI, from knowing when to use it, through getting results, to delivering work you can stand behind.
AI Task Strategy
Knowing when to use AI — and when not to. Can they assess a task and decide which parts to delegate to AI and which to keep human?
Prompting & Interaction Quality
Getting results from AI tools. Can they write effective prompts and iterate when the first attempt doesn't land?
Critical Evaluation & Validation
Spotting what AI gets wrong. Can they catch errors, hallucinations, and gaps before those reach real work?
Ethical & Responsible Use
Using AI without creating risk. Do they understand privacy, bias, IP, and regulatory boundaries?
Workflow Integration & Output Quality
Turning AI output into professional work. Can they edit, synthesise, and combine AI content with original thinking?
Built for Hiring. Designed by Science.
Step 1: Choose the proficiency level
Select the AI proficiency level your role requires — Aware, Functional, or Advanced. Each level has its own tailored scenarios matched to the AI competency the role demands.
Step 2: Candidates complete realistic scenarios
No trivia questions. No textbook definitions. Candidates work through realistic business scenarios that test how they actually approach AI-assisted tasks. The assessment takes approximately 15 minutes.
Step 3: Get dimension-level insights
Receive a detailed proficiency profile with scores across all five dimensions on a 0–100 scale. Understand exactly where each candidate excels and where they have gaps — not just a single pass/fail score.
Not Another AI Quiz
The market is full of AI assessments. Most test what candidates know about AI. We measure what they can actually do with it.
COMPETITORS
Developer-focused tools
Test ML engineering, model deployment, and coding skills. Great for hiring data scientists. Irrelevant for 95% of your workforce.
AI literacy quizzes
Test whether someone can define "hallucination." Knowing the vocabulary doesn't mean they can do the work.
Tool-specific assessments
Test proficiency with one product. That product updates next quarter. The skills become obsolete.
BRYQ
Bryq AI Proficiency Assessment
Measures how people actually work with AI in realistic scenarios. Tool-agnostic. Role-universal. Built on peer-reviewed research.

Built on Science, Not Guesswork
Our competency framework synthesises six peer-reviewed research sources spanning international government frameworks, academic competency models, and policy-grade workforce research.
Each dimension is grounded in established AI literacy and digital skills scholarship — not invented in a product sprint.
The result: a measurement instrument built on the most rigorous AI proficiency research available, validated through established psychometric methods.

Match the Assessment to the Role
Aware
For roles where basic AI awareness is sufficient. Can the candidate use AI tools for simple tasks, understand their limitations, and avoid common pitfalls?
Functional
For roles where regular AI use is expected. Can the candidate effectively prompt AI tools, evaluate outputs critically, and integrate AI into standard workflows?
Advanced
For roles requiring sophisticated AI application. Can the candidate design complex AI workflows, evaluate AI opportunities strategically, and guide others in responsible adoption?
FAQ
Find answers to the most frequently asked questions about Bryq
Is this assessment specific to any AI tool like ChatGPT or Copilot?
No. The assessment is completely tool-agnostic. It measures transferable AI proficiency skills — how to think about AI tasks, evaluate AI outputs, and integrate AI into work — that apply regardless of which specific tool a candidate uses. AI tools change rapidly; the underlying skills we measure are durable.
Does it only work for technical roles or developers?
No. The assessment is designed for any role where AI proficiency matters — marketing, finance, operations, customer success, HR, and more. You select the appropriate proficiency level for the role, and the scenarios adjust accordingly.
What are the five dimensions you measure?
We assess AI Task Strategy (planning and delegation), Prompting & Interaction Quality (effective AI communication), Critical Evaluation & Validation (quality control of AI outputs), Ethical & Responsible Use (risk and compliance awareness), and Workflow Integration & Output Quality (turning AI assistance into professional deliverables).
How long does the assessment take?
Approximately 15 minutes. It's designed to be thorough but respectful of candidate time — much shorter than most technical assessments.
Is there a pass/fail score?
No. Candidates receive scores across each dimension on a 0–100 scale, providing a proficiency profile rather than a binary outcome. This helps you understand where someone excels and where they might need development.
What research is the assessment based on?
The framework synthesises six peer-reviewed research sources spanning international government AI competency frameworks, academic literacy models, and policy-grade workforce research. Each dimension is grounded in established competency models and validated through psychometric methods.
Can I use this for existing employees, not just candidates?
The assessment is designed for pre-hire evaluation, but the proficiency framework applies equally to internal talent development. Contact us to discuss workforce assessment use cases.
How does this relate to the EU AI Act?
Article 4 of the EU AI Act requires organisations deploying AI to ensure adequate AI literacy among relevant staff. This assessment provides a validated, documented way to measure and demonstrate that literacy — supporting compliance efforts.
Will the assessment become outdated as AI tools evolve?
Because we measure underlying proficiency skills (critical evaluation, ethical reasoning, task strategy) rather than knowledge of specific tools or interfaces, the assessment remains relevant as tools change. We continuously update scenarios to reflect current workplace practices while keeping the core framework stable.
How is scoring done?
Each of the five dimensions is scored independently on a 0–100 scale. Scores are level-specific — a score of 85 at the Functional level means excellent performance on Functional-level tasks. This gives you actionable insight into where a candidate stands relative to what the role requires.