Article 4 of the EU AI Act: What HR and Hiring Teams Must Do About AI Literacy Before August 2026

Article 4 of the EU AI Act mandates AI literacy for all providers and deployers of AI systems. Here's what HR and hiring teams must do before August 2026.


The EU AI Act is live. Most of the noise focuses on high-risk AI systems and prohibited AI practices. But there's a provision buried in the general rules that hits HR harder than almost any other function. Article 4. The one about AI literacy.

It doesn't grab headlines the way biometric identification bans or systemic risk rules do. But if you manage people, hire people, or deploy AI anywhere in the employee lifecycle, this is the provision that will shape your compliance posture for everything that comes next.

Here's the problem. Most organizations hear "AI literacy" and reach for a webinar and a PDF. That's not what Article 4 requires. And the gap between what companies think they need and what regulators will expect is about to become very expensive.

Why Article 4 Matters More Than You Think

Article 4 doesn't create standalone fines. But when national competent authorities and market surveillance authorities investigate breaches of other AI Act obligations, the level of AI competence in your organization acts as a mitigating or aggravating factor. Non-compliance with high-risk AI systems requirements? Up to 15 million euros or 3% of global annual turnover, whichever is higher. For a company turning over 2 billion euros, that 3% works out to 60 million. Violations of banned practices under Article 5? Up to 35 million euros or 7%.

Picture this. Your AI-powered screening tool rejects a disproportionate number of candidates from a protected group. The regulator's first question won't be about the algorithm. It'll be: did the people operating this system understand how it works? Were they trained to spot bias in the results? Could they intervene?

If the answer is no, your penalty just got worse. You can't manage what you don't understand.

And there's a dimension here that goes beyond avoiding fines. Organizations that invest in genuine AI competence across their workforce make better decisions about which AI technologies to adopt and how to deploy them responsibly. Teams that understand AI's limitations don't waste budget on tools that underperform. Teams that understand its potential find applications that create real competitive advantage. As AI development accelerates and new tools enter the workplace quarterly, this competence gap only widens for organizations that don't measure and address it. Responsible AI development depends on people who can evaluate what these systems actually do in the real world. That's the AI strategy angle most compliance guides miss. Effective AI regulation depends on this foundation: if people understand the systems, they operate within requirements and adapt as guidance evolves.

What Article 4 Actually Requires

The text is deceptively short. Providers and deployers of AI systems must "take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf."

One sentence in the Artificial Intelligence Act that creates a workplace obligation for every organization using AI in the European Union. The European Parliament and the Council adopted this provision as part of Regulation (EU) 2024/1689, and it binds providers and deployers across the entire AI value chain, reaching the service providers and other persons who operate AI systems on their behalf. The regulation went through extensive amendments during its legislative journey, and Article 4 survived each round because legislators recognized that no technical safeguard works without competent people operating the systems.

The provision sits in Chapter I, the Act's general provisions, which means it applies horizontally. Not just to high-risk AI systems in Annex III. Not just to providers of general-purpose AI models who face their own specialized obligations. Article 4 extends to every organization in the value chain, from the providers of general-purpose AI models at the top to the deployers integrating these models into workplace workflows. Every AI system. Every deployer. Every provider. If your organization uses AI tools for recruitment, performance management, workforce analytics, learning recommendations, or even drafting internal communications with generative AI, you're in scope.

Article 4 has applied since February 2, 2025, making it one of the earliest provisions to take effect after the regulation entered into force. National competent authorities and market surveillance authorities across member states begin full enforcement from 2 August 2026. Providers and deployers in third countries whose AI systems are used on the Union market are equally bound.

Who counts as a provider and deployer?

Under the AI Act, providers are entities that develop AI systems or have them developed and place them on the market. Deployers are organizations using AI systems under their authority, which covers most employers. You can't assume the literacy duty rests solely with your vendor. If your team operates the system, configures it, or relies on its results for decision-making, you're a deployer.

Providers of high-risk AI systems carry additional obligations around conformity assessment, technical documentation, and EU declaration of conformity. But Article 4's competence mandate sits upstream of all of that. It applies to every provider and every deployer of AI systems, regardless of risk classification. The obligation covers the full AI value chain: from the providers building the systems to the deployers using them to the service providers helping implement them. Stakeholders across your organization, from procurement to operations, need to understand their role.

How the AI Act Defines the Competence Standard

Article 3(56) defines AI literacy as "skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause." Recital 20 reinforces this, explaining that the regulation deliberately avoids prescribing a fixed curriculum and instead uses the open standard of a "sufficient level" so requirements can be tailored to organizational context.

Three words doing heavy lifting: skills, knowledge, understanding.

Skills means practical ability. Can the people operating your AI systems use them correctly? Can they tell a flawed recommendation from a sound one? Do they know when to override the system and when to trust it?

Knowledge means familiarity with concepts. Can your managers explain why an AI score isn't the same as objective truth? Can your compliance team connect GDPR requirements and data protection rules to the tools you've deployed? Do your technical teams understand what technical documentation and conformity assessment procedures actually require?

Understanding goes deeper. It means recognizing that every AI output carries the biases of its training data. That a CV screening algorithm trained on historical hiring data will replicate the biases of past decisions. That affected persons, your candidates and employees, experience these systems from the other side and have rights under both the AI Act and existing anti-discrimination law.

The European Commission's guidance makes this explicit: calibrate your program to the role, the risk level, and the specific AI systems people interact with.


| Term | What It Means for HR and Hiring |
| --- | --- |
| AI literacy (Article 3(56)) | Skills, knowledge, and understanding needed to use AI responsibly |
| Sufficient level of AI literacy | Proportionate to role, risk level, and the specific AI systems in use |
| Affected persons | Candidates, employees, and others impacted by AI-driven decisions |
| High-risk AI systems | Systems in areas like recruitment, selection, performance evaluation, and worker monitoring (Annex III, category 4) |
| Providers | Organizations developing or commissioning AI systems for the market |
| Deployers | Organizations using AI systems under their authority |
| Banned uses (Article 5) | Certain applications of AI prohibited outright, including workplace emotion recognition and biometric inference of sensitive traits |

Why HR Faces a Double Obligation

Here's what makes this uniquely challenging for HR. You don't just face Article 4's general literacy duty. You're heading straight into the high-risk AI systems requirements under Annex III.

Annex III, category 4 covers "employment, workers management and access to self-employment." That includes AI systems intended for recruitment or selection of candidates (targeted job advertisements, CV parsing and filtering, candidate ranking, interview scoring), as well as systems for promotion decisions, termination, task allocation, performance monitoring and evaluation. Most AI used in HR for screening, ranking, scoring, or monitoring will qualify as high-risk.

The high-risk obligations start to apply on 2 August 2026. That means HR will face both the Article 4 literacy duties (already in force) and the high-risk deployment duties (risk management, data governance, technical documentation, transparency, logging, robustness, oversight requirements) simultaneously. Penalties for serious non-compliance reach up to 15 million euros or 3% of global turnover, whichever is higher. Ongoing discussions under the Digital Omnibus package could adjust some timelines by linking them to harmonized standards availability, but these proposals aren't law yet. Work to the existing deadline.

This double obligation is precisely why AI competence across your HR function and your wider workforce can't be treated as a generic training exercise. It requires real measurement of who understands what, at every level.

Two Fronts: Existing Employees and New Hires

Article 4 creates obligations on two distinct fronts. Understanding the difference matters, because each demands a different response.

Front 1: HR teams need to assess and upskill existing employees

This is your immediate liability. Every person in your organization who already operates or uses AI systems on your behalf needs sufficient AI competence. Not "awareness." Not "they attended a session." Real proficiency, proportionate to their role and the risk level of the systems they interact with. These people are using your AI tools right now, today, and if they don't understand what they're doing, you're accumulating risk with every decision they make.

For HR teams, this means building a structured program that starts with measurement. You can't design effective training without knowing where people stand. Which departments over-trust AI recommendations? Which teams avoid the tools entirely because they don't understand them? Where are the dangerous gaps: people who are confident but wrong about how AI works?

The challenge is scale. A 500-person company might have 200 employees interacting with AI tools daily. A 5,000-person company might have 3,000. You can't baseline that through interviews or self-reported surveys. Self-assessment is notoriously unreliable when people don't know what they don't know. Ask someone if they understand AI bias and most will say yes. Put them in front of actual AI results and ask them to spot the problem? The numbers tell a different story.

What Article 4 demands is demonstrable, documented effort. That means objective measurement across your workforce, by role, by department, by the specific AI systems each group uses. Then targeted training designed against the gaps you actually found, not the gaps you assumed. Then reassessment to prove the training worked.

This is the cycle that turns compliance from a checkbox into a defensible program: measure, train, measure again. The delta between baseline and reassessment is your real-world evidence. It's what you show a regulator when they ask what measures you've taken.
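To make that loop concrete, here's a minimal sketch of the evidence it produces, using hypothetical per-person scores (the numbers, department names, and structure are illustrative, not Bryq's actual data model):

```python
# A minimal sketch of the measure-train-measure evidence loop.
# All scores and department names below are hypothetical.
from statistics import mean

baseline = {
    "Recruiting":  [52, 61, 48, 70],   # per-person proficiency scores, cycle 1
    "Finance":     [66, 72, 58, 63],
    "Engineering": [81, 77, 85, 79],
}
reassessment = {
    "Recruiting":  [68, 74, 65, 78],   # same teams after targeted training
    "Finance":     [70, 75, 66, 71],
    "Engineering": [84, 80, 86, 83],
}

# The per-department delta is the documented evidence that training worked.
for dept in baseline:
    before, after = mean(baseline[dept]), mean(reassessment[dept])
    print(f"{dept}: {before:.1f} -> {after:.1f} (delta {after - before:+.1f})")
```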

Front 2: Hiring teams need to screen candidates for AI proficiency

If your organization uses AI systems in its operations, and most do now, then every new hire who'll interact with those systems needs to be capable of using them responsibly. Article 4 doesn't say "train people after you hire them." It says ensure sufficient AI competence among "staff and other persons dealing with the operation and use of AI systems." That includes the people you're about to bring in.

For hiring teams, this means AI proficiency should become part of how you evaluate candidates. Not as a nice-to-have. As a compliance-relevant competence. When you're hiring someone who'll operate, configure, or make decisions based on AI recommendations, you need to know whether they actually understand how those systems work, where they can fail, and when to question what the machine tells them.

Think about it practically. You're hiring a recruiter who'll use your AI screening tool daily. Or a marketing manager who'll work with predictive analytics. Or a finance analyst relying on AI-generated forecasts. Article 4 expects sufficient competence among the people dealing with these systems. If you hire someone who can't distinguish between a reliable AI recommendation and a hallucinated one, you've created a compliance gap on day one.

This isn't about testing whether candidates can code a neural network. It's about measuring whether they can work with AI in the real world: interpret results critically, recognize when something looks wrong, understand the boundaries of what these tools can and can't do, and make informed decisions about when to rely on the system versus their own judgment.

Validated, standardized assessments that measure AI proficiency during the hiring process give you two things Article 4 demands: confidence that new hires meet the competence bar for the AI systems they'll use, and documented evidence that you've taken measures to ensure competence before someone ever touches a system.

Where Bryq fits across both fronts

Whether you're evaluating candidates during recruitment or baselining your existing workforce, you face the same core problem: how do you objectively measure whether someone can actually work with AI? Not whether they think they can. Whether they actually can.

Bryq's AI Proficiency Assessment is built for exactly this. It measures how well people understand, evaluate, and interact with AI systems, scored against benchmarks tied to real workplace competencies. Use it during hiring to screen candidates for the AI proficiency their role demands. Use it across your organization to baseline where teams stand today, identify gaps by department and role, and track improvement after training. Each assessment cycle gives you documented, auditable evidence that you've taken the measures Article 4 requires, whether for new hires or existing staff.

Prohibited AI Practices HR Needs to Recognize

Separate from Article 4, Article 5 of the AI Act prohibits certain AI practices outright because they present unacceptable risk. These bans have been in force since February 2025. Several hit HR directly.

Workplace emotion recognition. AI that attempts to infer candidates' or employees' emotions from facial expressions, voice patterns, or body language during interviews, assessments, or monitoring is explicitly prohibited, except for narrow medical or safety exceptions. If your video interview platform includes "sentiment analysis" or "engagement scoring," that feature is non-compliant. Disable it or replace the tool.

Biometric inference of sensitive traits. Tools that use biometric data to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation are banned. These practices pose so severe a risk to fundamental rights when deployed at scale that the AI Act took the rare step of outright prohibition. This matters for any system processing facial images, voice data, or other biometric inputs in ways that go beyond simple identity verification.

Social scoring in the workplace. AI that combines behavioral or personal data from multiple contexts to rank employees or candidates and then denies opportunities based on that aggregated scoring could fall under the social scoring prohibition if the treatment is detrimental or disproportionate.

Untargeted facial image scraping. Suppliers relying on large-scale scraping of facial images to build recognition databases are themselves engaged in banned practices under Article 5. Using their products makes you complicit. This applies equally to providers of general-purpose AI models whose training pipelines may incorporate scraped biometric data.

Because these bans relate to uses of AI rather than just technical features, HR and procurement teams must be capable of recognizing when a vendor's product crosses a red line. The European Commission's guidelines are clear: deployers remain responsible for avoiding prohibited practices even where vendors implement their own safeguards. This makes internal awareness of these bans essential for stakeholders across procurement, HR, and IT.

Training should cover concrete real-world examples. That "emotional tone scoring" feature your vendor describes as an engagement metric? It could qualify as banned workplace emotion recognition. Your team needs to know that before they switch it on.

Human Oversight Only Works When People Understand the System

You can't have real transparency or meaningful oversight without the people using the system understanding it. That's why Article 4 underpins Articles 13 and 14, which deal with transparency obligations and human oversight. Official guidance explicitly links these provisions.

Think about what human oversight actually means in HR. Someone monitors AI results, interprets employee performance scores or candidate rankings, detects when results look off, and intervenes when something goes wrong. Deployers of high-risk AI systems have specific obligations around supervisory control, monitoring for serious incidents, and reporting to national competent authorities. None of that functions without proficient staff.

A person who doesn't understand the system they're overseeing isn't providing oversight. They're rubber-stamping an algorithm. And in high-risk AI systems used for employment decisions, that's the gap between a defensible process and a discrimination claim.

This is also where conformity assessment obligations connect back to Article 4: providers of high-risk AI systems must demonstrate, through technical documentation and applicable standards, that their systems can be effectively overseen. But effective oversight depends entirely on the people doing the overseeing.

AI proficiency, not just awareness, becomes the differentiator. Awareness tells you someone has heard of bias. Proficiency tells you they can spot it in a ranked list, question why the system flagged certain profiles, and make informed decisions about when to trust the algorithm and when to trust their own judgment. That's what you need to measure, in candidates and in your existing workforce alike.

Building an Article 4 Program That Covers Both Fronts

No prescribed curriculum. No mandatory certification. No exams. But regulators will ask for evidence, and "we didn't document it" isn't a defense.

Step 1: Inventory your AI systems across the employee lifecycle

Map every AI system in your organization that employees interact with. Not just HR tools. Include: sourcing and targeted advertising tools, CV parsing and screening systems, assessment platforms, interview scheduling and scoring tools, performance management and analytics systems, learning recommendation engines, employee monitoring tools, workforce planning models, and the generative AI assistants your people are already using to draft communications, analyze data, or automate tasks.

For each system, record whether it falls within Annex III category 4, relies on biometric data or emotion recognition, or could resemble a prohibited practice. Also clarify your functions: are you acting as a provider, a deployer, or both? Map which roles interact with which systems. This inventory anchors everything that follows.
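If it helps to picture the register, here's a minimal sketch of a single inventory entry. The field names are hypothetical, and whether the register lives in a spreadsheet or a governance tool matters less than capturing these dimensions for every system:

```python
# A minimal sketch of an AI system inventory entry, with hypothetical
# field names. Capture the same dimensions for every system you map.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                      # e.g. the vendor's screening tool
    use_case: str                  # where it sits in the employee lifecycle
    annex_iii_category_4: bool     # employment/worker-management high-risk?
    uses_biometric_data: bool      # triggers an Article 5 red-flag review
    emotion_recognition: bool      # prohibited in the workplace (Article 5)
    our_role: str                  # "provider", "deployer", or "both"
    interacting_roles: list[str] = field(default_factory=list)

screening_tool = AISystemRecord(
    name="CV screening platform",
    use_case="recruitment - candidate ranking",
    annex_iii_category_4=True,
    uses_biometric_data=False,
    emotion_recognition=False,
    our_role="deployer",
    interacting_roles=["recruiters", "hiring managers"],
)
```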

Step 2: Baseline AI proficiency across your existing workforce

Before designing training, measure where your organization actually stands. This is the step most companies skip, and it's the one that makes or breaks your program.

Ask someone if they understand how AI works and most will say yes. Put them in front of actual AI recommendations, ask them to evaluate the result, spot where bias might have entered, and explain when they'd override the system? That's where the real picture emerges. The gap between confidence and competence is exactly what regulators audit.

You need validated, objective assessment data, by role, by department, by the specific systems each group uses. This baseline tells you where to invest training resources, which teams pose the highest compliance risk, and what your starting point looks like for the continuous improvement cycle Article 4 demands.
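As a rough illustration of that gap analysis, assuming hypothetical role benchmarks and scores:

```python
# A minimal sketch of baseline gap analysis. Benchmark values and
# scores are hypothetical, purely for illustration.
ROLE_BENCHMARK = {"recruiter": 70, "analyst": 75, "manager": 65}

baseline_scores = [
    ("recruiter", "Talent Acquisition", 58),
    ("recruiter", "Talent Acquisition", 72),
    ("analyst",   "Finance",            61),
    ("manager",   "Operations",         67),
]

# Flag everyone below the proficiency bar for their role: these are
# the gaps your training budget should target first.
gaps = [
    (role, dept, score)
    for role, dept, score in baseline_scores
    if score < ROLE_BENCHMARK[role]
]
for role, dept, score in gaps:
    print(f"Gap: {role} in {dept} scored {score} (benchmark {ROLE_BENCHMARK[role]})")
```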

Bryq's AI Proficiency Assessment is designed for this. It measures how well people understand, evaluate, and work with AI, scored against benchmarks tied to real workplace competencies. Deploy it across departments to identify where your gaps actually sit. The results give you targeted data for training design and the documented evidence trail that compliance demands.

Step 3: Screen new hires for AI proficiency

Baselining existing employees is half the equation. The other half is ensuring every new hire meets the AI competence standard their role requires. Article 4's obligation covers "staff and other persons dealing with the operation and use of AI." That means your next recruiter, your next analyst, your next manager: if they'll work with AI, their proficiency matters from day one.

Build AI proficiency screening into your hiring process for roles that involve operating, configuring, or making decisions based on AI recommendations. This isn't about testing technical AI knowledge in every role. It's about measuring whether candidates can work responsibly with the specific types of AI systems they'll encounter: interpreting results critically, recognizing limitations, knowing when human judgment should override the machine.

Bryq's AI Proficiency Assessment works here too. The same validated instrument you use to baseline existing employees can be deployed during recruitment to screen candidates for the AI competence their role demands. You get a consistent measurement framework across both fronts, and documented evidence that you've taken measures to ensure AI competence before a new hire ever touches a system.

Step 4: Design role-based training against actual gaps

Generic awareness sessions satisfy no one. Build training modules tailored to the gaps your assessments revealed, not the gaps you assumed.

Frontline AI users (recruiters using screening tools, analysts using predictive models, customer-facing staff using chatbots) need to understand how their specific systems work, where bias enters, what outputs to trust and what to question, and when to escalate.

Managers and team leads need practical guidance on reviewing AI-assisted recommendations, understanding what a score actually represents, exercising independent judgment, and documenting reasons for overriding or accepting AI-influenced decisions.

HR leadership and compliance need training on AI governance responsibilities, vendor risk assessment (including understanding which vendors are providers of general-purpose AI models versus specialized tools), Annex III classification implications, and how to communicate AI use to works councils, unions, and employees in clear, non-technical terms.

Technical teams maintaining or configuring AI need training on data quality, cybersecurity, model monitoring for drift and degradation, technical documentation requirements, and conformity assessment procedures.

Training content should cover: how high-risk classification changes what your teams can do with employment AI; where bias, discrimination, and error risks sit; what privacy and cybersecurity rules apply; how to exercise effective oversight; how to report serious incidents; and concrete examples of prohibited practices (including that innocent-sounding "engagement score" on video interviews).

For providers of high-risk AI systems and providers of general-purpose AI models, training extends to conformity assessment procedures, harmonized standards, compliance protocols, and obligations around mitigation of systemic risk.

Step 5: Integrate with existing compliance frameworks

AI proficiency training doesn't need its own silo. Fold it into onboarding for new hires. Add it to your annual compliance refreshers alongside data protection and information security training. Update your acceptable-use policies to clarify expectations around the responsible use of AI systems. Make sure contractor and vendor agreements include AI competence clauses for external personnel working on your behalf.

Consultation with works councils and stakeholders matters, especially for high-risk or monitoring-intensive systems affecting working conditions. The AI Act's data governance requirements overlap with existing privacy obligations, so integration reduces duplication. Your AI training program should also connect to your existing anti-discrimination and equal opportunity frameworks. AI bias in employment decisions isn't a new problem. It's an old problem with a new delivery mechanism.

Step 6: Document everything

Training curricula and learning objectives per role. Assessment baselines and reassessment results. Attendance and completion records. Updates triggered by incidents, audits, or changes in deployed tools. Links between your AI competence measures and broader risk management, including DPIAs, fundamental rights impact assessments, and governance committee records.
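One way to picture that trail is as a dated evidence log. A minimal sketch, with hypothetical fields:

```python
# A minimal sketch of one entry in an Article 4 evidence log, with
# hypothetical fields. The goal is a dated, auditable trail linking
# each measure to the systems and roles it covers.
from dataclasses import dataclass
from datetime import date

@dataclass
class LiteracyEvidenceEntry:
    recorded_on: date
    measure: str              # training, assessment, policy update, ...
    trigger: str              # onboarding, incident, audit, tool change, ...
    roles_covered: list[str]
    systems_covered: list[str]
    artifact_ref: str         # where the curriculum/results/attendance live

entry = LiteracyEvidenceEntry(
    recorded_on=date(2026, 1, 15),
    measure="Role-based bias-spotting training, cycle 2",
    trigger="Reassessment showed a gap in the recruiter cohort",
    roles_covered=["recruiters"],
    systems_covered=["CV screening platform"],
    artifact_ref="governance/article4/2026-01-training-records",
)
```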

The scientific panel and independent experts advising the AI Office have signaled that documented, continuous improvement will be a key factor in how authorities assess compliance. Recital 27 reinforces that these measures should be proportionate but demonstrable. As future amendments refine enforcement expectations, your documentation should show how incidents, complaints, or audits led to updates in training content. That feedback loop is your strongest evidence of a living program.

Step 7: Reassess regularly

Tools evolve. New features ship. Models degrade. Regulations get clarified through amendments, delegated acts, and updated guidance from the AI Office. Future amendments to the AI Act's annexes may adjust which systems qualify as high-risk, which means your program needs to adapt as the regulatory perimeter shifts.

Periodic reassessment of AI proficiency across your workforce, and continued screening of new hires, creates the evidence trail of continuous improvement that regulators want to see. Each cycle gives you fresh data on whether training is working, where new gaps have opened, and where your real-world risk sits. Bryq's assessment is designed for exactly this repeated measurement: consistent benchmarks over time, so you can track progress and demonstrate it.

What Codes of Practice, Sandboxes, and the AI Office Mean for Employers

The EU AI Act encourages codes of conduct and codes of practice to help organizations operationalize obligations like Article 4. The AI Office, supported by an advisory forum and independent experts, is coordinating these frameworks. Industry associations and member states are producing guidance that employers can adapt.

For SMEs and start-ups, the "best extent" language allows proportional implementation. You don't need a Fortune 500 training department. But you do need a deliberate, documented approach that shows you've thought about which systems your people use, what risks they carry, and how you're closing the knowledge gap.

Each AI regulatory sandbox offers a structured real-world environment to validate compliance approaches under regulatory supervision. For both providers of general-purpose AI models and deployers in employment, they function as learning environments where teams build proficiency while testing solutions before full deployment. As codes of practice mature and the advisory forum shapes implementation priorities, they'll likely establish benchmarks for what "sufficient" AI proficiency looks like across sectors. Early movers who build credible programs now will help shape those benchmarks rather than scrambling to meet them after future amendments lock in stricter expectations.

Every rule in this regulation assumes one thing: your people understand the systems they're deploying. Trustworthy AI isn't just an aspiration. It's the organizing principle. From transparency to conformity assessment to technical documentation, every obligation depends on workforce competence to function.

Your Roadmap Before August 2026

Your preparation window closes in August 2026, when market surveillance authorities across member states begin active oversight. Article 4 is already enforceable. Here's how to sequence the work.

This week: Map every AI system in your organization that employees interact with, including the ones teams adopted on their own. Check whether any qualify as high-risk under the employment category in Annex III. Flag anything that involves biometric data, real-time processing, emotion recognition, or automated decisions about candidates or workers. If you find tools with emotion-scoring or sentiment-analysis features, you may already be running a banned practice under Article 5. Address that first.

This month: Baseline your workforce's actual AI proficiency. Use a validated, scalable assessment that gives you objective data by role and department. This is what regulators will look for in an audit, and it's the foundation for everything else. At the same time, start integrating AI proficiency screening into your hiring process for roles that involve working with AI. Bryq's AI Proficiency Assessment is designed for both: measuring existing employees and evaluating candidates against the same validated benchmarks.

By Q3 2025: Lock in program ownership. Assign someone (or a cross-functional team) to own AI competence across the organization, connect it to your AI governance framework, and report on progress to your executive team. Launch initial role-based training designed against the gaps your baseline assessment revealed. Begin reviewing vendor contracts to ensure you're getting sufficient documentation, instructions for use, and compliance assurances. If available in your jurisdiction, consider participation in an AI regulatory sandbox to test your approach under guided conditions.

By Q1 2026: Run your first round of deeper, high-risk-specific training covering high-risk classification requirements, bias testing, transparency to employees and candidates, oversight protocols, and prohibited practices. Integrate this with your FRIA and DPIA processes if applicable. Complete your first reassessment cycle. Compare proficiency data against your initial baseline. Adjust training based on what worked, what didn't, and what's changed in your systems or the regulatory landscape (including any amendments to the AI Act's annexes or new guidance from the AI Office).

By August 2026: Have your full program operational, documented, and demonstrable. Assessment baselines and reassessment data for existing employees. AI proficiency screening integrated into hiring for relevant roles. Training curricula per role. Vendor documentation and oversight protocols. Incident response procedures. The mitigation argument you want in place well before any regulator asks.

The Bottom Line

The EU AI Act's AI literacy obligation isn't the flashiest part of the regulation. But for HR and hiring teams, it might be the most consequential.

Employment AI is explicitly high-risk. Several common workplace tools cross into prohibited territory. And the obligation to ensure AI competence applies to everyone who operates AI on your behalf: the people you've already hired and the people you're about to bring in.

Article 4 is forcing a question every organization should have been asking already: do our people, current and future, actually understand the AI systems they use to make decisions that affect other people's careers, performance, and livelihoods?

The organizations that answer "yes" with evidence will be the ones that thrive. Not just because they avoid penalties, but because teams with real AI proficiency make better decisions, build stronger experiences for employees and candidates, and use AI the way it was meant to be used: as an aid to human judgment, not a replacement for it.

Start by measuring where you stand. Across your workforce and in your hiring process. Build from there.

Your existing employees are using AI systems today. Your next hires will use them tomorrow. In both cases, Article 4 demands proof that they're competent. Bryq's AI Proficiency Assessment gives you that proof: validated, role-specific measurement of how well people understand, evaluate, and work with AI. Book a walkthrough to find out more about the AI Proficiency Assessment.


Author

Bryq is composed of a diverse team of HR experts, including I-O psychologists, data scientists, and seasoned HR professionals, all united by a shared passion for soft skills.
