EU AI Act Article 4 Compliance for Hiring Teams

Article 4 is already law. Here's what it means for the people in your hiring workflow, what auditors will look for, and how to put a defensible AI literacy programme in place before Annex III enforcement starts.

What Article 4 actually requires

Article 4 of Regulation (EU) 2024/1689 says any company using AI in the EU must "ensure, to their best extent, a sufficient level of AI literacy" among the staff and contractors who operate those systems. Not just vendors. Not just engineering. Anyone on your side of the fence who touches an AI tool.


For hiring, that pulls in recruiters, hiring managers, HR business partners, and the external sourcing agencies you've outsourced work to. If they use AI to write a job description, screen a CV, score an assessment, or shortlist a candidate, they need to understand what the tool does, where it can go wrong, and when to override it. The Commission's AI Office has confirmed this scope extends to people working for service providers, "generally need[ing] the appropriate AI skills to fulfil the task in question."


The Commission has also been clear that there is no certification scheme, no mandatory training programme, and no one-size-fits-all metric for what counts as "sufficient." The standard scales with the role, the risk of the tool, and the people affected by its output. That sounds vague, but it has teeth. Under Article 99, a literacy failure can be used as an aggravating factor in fines that reach up to €15 million or 3% of global annual turnover when something else goes wrong with a high-risk hiring tool.


The timeline: where things actually stand in May 2026


Three dates matter. The status of each is different from what you may have read six months ago.


2 February 2025. Article 4 is already in force.


AI literacy obligations apply to every provider and deployer of AI in the EU, regardless of the risk category of the tool. There is no transition period. If you have AI in hiring today, this duty is on you today. (Source: AI Act Service Desk, Article 4 timeline page)


2 August 2026. Original enforcement date for Annex III high-risk obligations.


Under the original AI Act text, the bulk of the high-risk regime, including human oversight, transparency to deployers, and risk management for hiring AI, starts being enforced by national market surveillance authorities on this date. (Source: AI Act Service Desk, implementation timeline)


2 December 2027 (proposed). The Digital Omnibus on AI may postpone Annex III enforcement.


In May 2026, the European Parliament and Council reached a provisional political agreement on the Digital Omnibus on AI (COM(2025) 836), which would push the Annex III high-risk application date to 2 December 2027. As of writing, this is a political agreement, not yet adopted law. Until the amending regulation is published in the Official Journal, the binding date remains 2 August 2026. (Source: European Parliament Legislative Train file 2025/0359(COD))


Article 4 itself is not affected by the proposed Omnibus delay. AI literacy stays applicable from February 2025 either way. Planning around the proposed postponement of high-risk enforcement is a sensible thing to do for resource sequencing; treating it as a reason not to act on Article 4 is not.


What "adequate AI literacy" actually means in practice


The Commission's AI Office FAQ gives the clearest signal of what regulators want to see. Four things, in order:


  1. Map your AI in hiring.

A written inventory of every AI tool in the recruitment workflow. Embedded ATS features count. Free generative tools recruiters use to draft job ads count. If someone on your team is using it for hiring, it goes in the inventory. This is the work most companies skip and most auditors will ask for first.


  2. Match roles to systems.

For each tool, list who uses it, who relies on its output, and who is responsible for overriding it. Recruiters, hiring managers, HR business partners, external sourcing partners. The Commission specifically calls out that contractors and service-provider staff need the same level of literacy as your own employees when they're acting on your behalf.


  3. Differentiate the literacy you provide.

One generic e-learning module sent to all of HR is not what regulators have in mind. Recruiters running automated screening need depth on bias risks and override protocols. HR leadership needs depth on vendor management and the Article 99 penalty framework. Operations needs depth on data governance and logs. The Commission's word for this is "tailored"; in practice it means three or four role-specific tracks, not one.


  4. Document everything.

Training register. Materials and syllabi. Attendance records. Simple competency checks or attestations. A review cadence showing how training is updated as your tools change. The Commission has explicitly said there is no required certificate, but national supervisory authorities, including Germany's Bundesnetzagentur, Austria's RTR, and France's CNIL, have been clear in their published guidance that they expect to see records.

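The register itself can be a simple dated table per person and system, plus a check that flags what an auditor would flag: missing attestations and training that has gone stale against your review cadence. A minimal sketch; the 12-month cadence, names, and record shape are example policy choices, not figures from the Act or the Commission FAQ.

```python
from datetime import date, timedelta

# Each entry: (person, role, system trained on, training date, attested?)
training_register = [
    ("A. Recruiter", "recruiter", "CV ranking module", date(2025, 9, 10), True),
    ("B. Manager", "hiring manager", "CV ranking module", date(2024, 11, 2), True),
    ("C. Ops", "hr ops", "ExampleATS configuration", date(2025, 9, 1), False),
]

REVIEW_CADENCE = timedelta(days=365)  # example: annual refresh of role training

def needs_attention(register, today):
    """Return (person, system) pairs with stale or unattested training."""
    return [(person, system)
            for person, _role, system, trained, attested in register
            if not attested or today - trained > REVIEW_CADENCE]

flags = needs_attention(training_register, date(2026, 5, 1))
print(flags)
```

Even this much turns "we did training at some point" into a record you can hand over: who, on which system, when, and what is overdue.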

A useful self-test:

If a national market surveillance authority asked you tomorrow to show what your AI literacy programme looks like, how long would it take you to produce the binder? If the answer is more than a week, you're behind. If you can't produce one at all, you have a problem.


What each role-tier needs to know (a working example)


Adapted from European Commission AI Office FAQ + national supervisory guidance.


Recruiters & sourcers
Literacy focus: Operational use of screening, ranking, generative tools. Spotting hallucinations. GDPR basics. Override protocols.
Typical evidence: Role-specific training log; signed-off override examples.

Hiring managers
Literacy focus: Interpreting AI scores. Resisting automation bias. Human oversight in practice (Article 14).
Typical evidence: Decision logs showing human verification of AI-influenced decisions.

HR ops / IT
Literacy focus: System configuration. Data inputs. Article 12 logging. Reading vendor instructions for use.
Typical evidence: Configuration records; data quality reviews.

HR leadership
Literacy focus: Vendor management. Procurement-stage AI risk review. Article 99 penalty awareness. Works council consultation duties.
Typical evidence: Vendor due-diligence files; works council consultation records (where applicable).

How Bryq's AI Proficiency Assessment supports your Article 4 programme


Bryq is one component of an Article 4 programme. Not the whole programme. We measure what your people can actually do with AI in scenarios that look like the work, and we produce records you can put in your audit file.

Here's what's measured. Five dimensions, scored 0–100, built on six peer-reviewed research frameworks (UNESCO AI Competency Framework, SFIA v9, OECD AI literacy work, among others):


AI Task Strategy
What it measures: When and how to deploy AI in a workflow; recognising the right and wrong moments to use it.

Prompting & Interaction
What it measures: Effective input design, iteration, and getting useful output from a generative system.

Critical Evaluation
What it measures: Spotting hallucinations, bias, factual errors, and weak outputs before they reach a decision.

Ethical & Responsible Use
What it measures: Handling sensitive data, transparency to candidates, knowing what to escalate.

Workflow Integration
What it measures: Embedding AI into real work without becoming dependent or losing judgement.

Tool-agnostic. Role-universal. 15 minutes per person. No pass/fail. Designed to measure practical decision-making, not knowledge of terminology.


Where this sits in your Article 4 programme

The assessment generates structured, exportable evidence of where your workforce stands today, which role-tiers need more support, and how the gap closes over time. It contributes to the "match roles to systems" and "document everything" steps above. It does not, by itself, satisfy Article 4. No single tool does, and the Commission has explicitly said so.


A note on Bryq's own classification

Bryq's AI Proficiency Assessment uses AI in candidate evaluation. Under a strict reading of Annex III(4)(a) of the EU AI Act, that places it within the high-risk category. We treat Bryq as a provider of a high-risk AI system and meet the corresponding obligations: documented risk management, data governance, technical documentation, transparency to deployers, and human oversight features. When you audit your vendors as part of your own Article 4 programme, ask for these. We provide them.


Frequently Asked Questions


  1. Does Article 4 apply to my company if we use AI in hiring?

Yes. Article 4 applies to all providers and deployers of AI systems placed on the market or used in the EU, regardless of the risk category of the tool. If your hiring team uses any AI in recruitment, including embedded features in your ATS or generative AI to draft job ads, you have an Article 4 duty toward the staff and contractors who use those systems. (Sources: Article 4 EU AI Act; AI Act Service Desk Article 4 page)


  2. What counts as "adequate AI literacy" for hiring teams?

The Commission has deliberately avoided a single definition. The standard is "sufficient" in the context of the role, the risk profile of the tool, and the people affected by its output. For most hiring teams, that means three things: general understanding of how AI works and where it fails; system-specific knowledge for the tools your team actually uses; and role-specific depth on bias, oversight, data protection, and escalation. Generic e-learning modules are unlikely to be enough on their own. (Sources: AI Act Service Desk Article 4; European Commission AI Literacy FAQ)


  3. Are we a "provider" or a "deployer" of AI under Article 4?

Almost certainly a deployer, if you use AI tools built by other companies in your hiring workflow. Providers are the companies that develop and place AI systems on the market (your ATS vendor, your assessment vendor). Deployers are organisations using those systems on their own authority. Article 4 applies to both, but the literacy duties of deployers are usually about understanding and overseeing tools you didn't build. (Sources: Article 3(3) and 3(4) EU AI Act; AI Act Service Desk role guidance)


  4. Do I need to assess current employees, or only new hires?

Both. Article 4 covers "staff and other persons dealing with the operation and use of AI systems," not just new joiners. National supervisory authorities including Germany's Bundesnetzagentur and Austria's RTR have stressed that AI literacy is an ongoing obligation that must be regularly updated and adapted as tools and use cases change. You can phase by risk (start with the teams closest to AI-driven decisions) but you need to be able to show that current staff are covered over time, not just at onboarding. (Sources: Article 4 EU AI Act; Bundesnetzagentur AI literacy guidance; RTR AI Service Desk Austria)


  5. What happens if we do not comply by 2 August 2026?

Article 4 itself does not carry a standalone fine, but a literacy failure can be cited as an aggravating factor in enforcement under Article 99, which sets fines for high-risk obligation breaches at up to €15 million or 3% of global annual turnover (whichever is higher). If a candidate brings a discrimination complaint over an AI-assisted decision and your team cannot show that the people overseeing the tool understood it well enough to spot the problem, expect that gap to be cited in the file. (Sources: Article 99 EU AI Act; European Commission AI Literacy FAQ)

What documentation does the EU expect to see in an Article 4 audit?

There is no fixed template, but the Commission FAQ and national supervisory guidance align on what auditors will ask for. At minimum: an AI literacy policy describing your objectives and target groups; a training register listing who was trained, when, and on which systems; the materials or syllabi you used; simple competency checks or attestations; and a review cadence showing how training is updated as tools change. (Sources: European Commission AI Literacy FAQ; Bundesnetzagentur guidance; RTR Austria deployer obligations)

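In practice, the training register can be as simple as one structured record per person per system. A minimal sketch in Python — the field names and the 12-month refresh cadence are illustrative assumptions, not a format prescribed by the Commission or any national authority:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record shape: fields chosen to match what guidance asks for
# (who was trained, when, on which system, with which materials, attested).
@dataclass
class TrainingRecord:
    person: str
    role: str              # e.g. "recruiter", "hiring manager"
    ai_system: str         # the tool the training covered
    trained_on: date
    material_version: str  # syllabus or module version used
    attested: bool         # competency check / attestation captured

REFRESH_INTERVAL = timedelta(days=365)  # assumed review cadence

def due_for_refresh(record: TrainingRecord, today: date) -> bool:
    """Flag records that are unattested or older than the review cadence."""
    return (not record.attested) or (today - record.trained_on > REFRESH_INTERVAL)
```

Whatever format you use, the point is that each entry is dated, tied to a named system, and queryable — so you can answer "who was trained on this tool, and when was that last reviewed?" without reconstructing it from emails.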
How does Article 4 interact with the broader EU AI Act high-risk classification for hiring tools?

Article 4 is a horizontal obligation that applies to every AI system regardless of risk category. Hiring AI is separately classified as high-risk under Annex III, point 4 of the AI Act, which triggers additional duties under Articles 13 (transparency), 14 (human oversight), and 26 (deployer obligations). Meeting Article 14 oversight requirements for one specific high-risk tool does not, on its own, demonstrate Article 4 compliance across your wider AI-supported hiring workflow. The two operate in parallel. (Sources: Articles 4, 6, 13, 14, 26 EU AI Act; Annex III(4))

Do I need to consult my works council before deploying AI in hiring?

In most major EMEA jurisdictions, yes. In Germany, the introduction of AI tools used in hiring or performance evaluation triggers Betriebsrat co-determination under §§ 87, 90, 95 BetrVG. In France, the CSE must be consulted on new technologies under Article L.2312-8 of the Code du travail (the Nanterre court suspended an AI deployment in February 2025 specifically over a missed CSE consultation). The Netherlands, Austria, Italy, and Belgium have similar regimes. Article 4 documentation does not replace this duty, but a well-built literacy programme makes the works council conversation significantly easier. (Sources: BetrVG; Code du travail; Nanterre TJ ordonnance de référé 14 February 2025; WOR Article 27)

Is Bryq itself a high-risk AI system under Annex III(4)?

Bryq's AI Proficiency Assessment is used in candidate evaluation, which falls within Annex III, point 4(a) of the EU AI Act. We treat the system as high-risk and operate as a provider under Chapter III. That means documented risk management, representative training data, technical documentation, logging, transparency to deployers, and built-in support for human oversight. If you're running vendor diligence as part of your own programme, we can provide the corresponding documentation. (Sources: Annex III(4) EU AI Act; Recital 57)

Does Bryq's AI Proficiency Assessment satisfy Article 4 on its own?

No, and we want to be direct about that. Article 4 is a programme obligation: inventory, role mapping, training delivery, governance, and documentation. Bryq is one component of that programme. We measure the literacy of your workforce in a structured, defensible way and produce records you can put in your audit file. The rest of the programme (policies, training delivery, override protocols, works council engagement) sits with you. The Commission has been explicit that no single tool grants a presumption of compliance. (Sources: Article 4 EU AI Act; European Commission AI Literacy FAQ; AI Act Service Desk)

Get an outside read on where your team stands

If you'd like a structured view of your hiring team's AI literacy before enforcement starts, we run 30-minute readiness calls. We'll walk through where Article 4 sits in your current programme, where the gaps are most likely to be for an organisation your size, and what a defensible audit binder looks like in practice. No demo, no sales pitch. If Bryq turns out to be useful, we'll tell you. If not, you leave with a clearer picture of what to do next.

Ready to see Bryq in action?

Start hiring based on real data.

Join our community!
Get the latest HR trends & tips delivered to your inbox.
