AI in recruiting is no longer a pilot project. CV filters, automated matching, skill scoring, AI-assisted interviews, and recommendation systems for promotions or transfers are already in use in most HR departments. They are often deployed without management knowing that this triggers "high-risk AI system" status under the EU AI Act.
That classification is explicit in the law. Annex III, point 4 of the AI Act lists AI systems in "employment, workers management and access to self-employment" as high-risk. That places HR alongside law enforcement, migration, and critical infrastructure among the most heavily regulated application areas of the entire regulation.
For HR leadership, this is more than a compliance question. It's a leadership decision. Which tools are deployed, who is responsible, how risks are distributed between HR, IT, and legal, all of this is becoming a C-level governance task in the coming months. This article sets out what Annex III point 4 means for HR leadership and which strategic decisions need to be made now.
What Annex III point 4 actually covers
The legal text is divided into two parts, both of which directly affect HR work.
Annex III, point 4(a): Recruitment and selection
AI systems for "the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates."
Annex III, point 4(b): Workforce management
AI systems "to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships."
That's a broad scope, and deliberately so. Recital 57 explains why: AI in the employment context can "appreciably impact future career prospects, livelihoods of those persons and workers' rights." That's the basis for the high-risk classification.
Which tools are actually affected
The list is longer than most HR departments expect. As soon as a system can rank, filter, evaluate candidates, or provide recommendations for personnel decisions, it's in scope.
Important: AI doesn't need to be the sole decision-maker for high-risk classification to apply. Even "material influence" on the decision (ranking, filtering, scoring) is enough. The often-cited "human in the loop" doesn't exempt you from high-risk classification. It's part of human oversight under Art. 14, not a way out of Annex III.
What changes for HR leadership in practice
High-risk classification triggers a whole series of obligations. Seven points are particularly relevant for HR leadership:
Complete AI inventory in HR
Which AI systems are deployed in HR? Including embedded AI features in applicant tracking, learning platforms, HRIS, people analytics. Shadow AI is particularly common in HR because tools are often procured by business units without IT or compliance involvement.
Name clear ownership
Who owns AI compliance in HR? The CHRO, the DPO, an AI Officer, or a shared responsibility with IT? Without a clear role, accountability defaults to the person who signed the contract, and that's rarely the same person who understood the compliance risk.
Vet providers systematically
As a deployer, you can't trust providers blindly. Ask: Has the provider classified the system as high-risk? Is there a declaration of conformity? CE marking? Registration in the EU database? A provider that dodges these questions is a risk for your company.
Make human oversight real
Art. 14 requires human oversight of high-risk AI. In the HR context that means concretely: a real person reviews pre-selection results, can override them, and documents that review. "The recruiter saw the system output" isn't enough. It must be demonstrable that, and how, the human decision shaped the final result.
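What "documented" can mean in practice: a review record that captures the AI's recommendation, the human decision, and the reasoning behind any override. The structure below is a minimal sketch under our own assumptions, not a format mandated by Art. 14:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OversightRecord:
    """Minimal audit-trail entry for a human review of an AI pre-selection (illustrative)."""
    candidate_ref: str       # pseudonymous reference, not the candidate's name
    ai_recommendation: str   # e.g. "reject", "shortlist"
    human_decision: str      # the final, human-made decision
    overridden: bool         # did the reviewer deviate from the AI?
    rationale: str           # why: this is what makes oversight demonstrable
    reviewer: str
    reviewed_at: str

def review(candidate_ref: str, ai_recommendation: str,
           human_decision: str, rationale: str, reviewer: str) -> OversightRecord:
    """Record that, and how, the human decision related to the AI output."""
    return OversightRecord(
        candidate_ref=candidate_ref,
        ai_recommendation=ai_recommendation,
        human_decision=human_decision,
        overridden=(human_decision != ai_recommendation),
        rationale=rationale,
        reviewer=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )

record = review(
    candidate_ref="cand-0042",
    ai_recommendation="reject",
    human_decision="shortlist",
    rationale="Relevant project experience not captured by the CV parser",
    reviewer="recruiter-17",
)
```

The key design choice is that the rationale is mandatory: an override without a stated reason would look, in an audit, no different from rubber-stamping.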
Inform candidates and employees
Art. 26(11) requires that affected persons be informed about the use of high-risk AI. For candidates this means: a notice in the application process, ideally already in the job posting. For employees, Art. 26(7) also applies: workers and their representatives must be informed before deployment, regardless of whether a works council exists.
Check fundamental rights impact assessment
Art. 27 obliges certain deployers to conduct a fundamental rights impact assessment before deploying a high-risk system. This applies particularly to public bodies and private providers of public services. But private companies should also check whether their existing GDPR Art. 35 data protection impact assessment needs to be expanded with AI-specific aspects.
Actively manage discrimination risk
AI in recruiting isn't just an AI Act topic, it's also an anti-discrimination topic. A poorly trained AI can systematically disadvantage certain groups. If proven, damage claims can arise independently of the AI Act. HR leadership should work with legal to define how bias tests, audit protocols, and response processes are set up.
The strategic questions for HR leadership
These obligations lead to three C-level decisions that cannot be delegated.
Question 1: Where do we draw the line between efficiency and control?
AI can dramatically speed up the recruiting process. It can also lead to decisions that nobody can explain anymore. Where does the balance lie for your company? Is the speedup worth the effort of human oversight? And which decisions stay deliberately "human", even when AI could make them faster?
Question 2: Who bears responsibility when something goes wrong?
An AI system makes a questionable decision. A candidate files a discrimination complaint. The Federal Network Agency announces a market surveillance audit. Who in your company can act? The CHRO? IT? The DPO? Without a clear ownership structure, gaps open up that get interpreted later as compliance failures.
Question 3: How do we communicate this internally and externally?
AI in recruiting is a sensitive topic for candidates, for employees, and for works councils. The worst strategy is to deploy tools quietly and wait for complaints. The better strategy is proactive transparency: Which tools do we use? For what? Who reviews? How can someone object? Confident handling builds trust. Hidden deployment undermines it.
An important gray area: When does HR become a provider?
Many HR departments think: We're buying the tool, so we're just deployers. That's only true as long as the tool is used unchanged. Anyone who substantially modifies a recruiting tool, for example by retraining it on their own historical hiring data or extending it with criteria the provider didn't intend, becomes a provider under Art. 25 of the AI Act, with all corresponding obligations (technical documentation, conformity assessment, CE marking, EU database registration).
Using a general-purpose AI model for HR purposes can also cross this threshold. Anyone building an internal candidate assistant on top of an LLM API that sorts profiles changes the intended purpose and likely becomes a provider of a high-risk system. This is a significant expansion of responsibility that's often overlooked in practice.
What needs to happen now
Most HR leaders don't have to solve everything at once, but they have to start. A pragmatic sequence:

1. Build the complete AI inventory in HR, including embedded AI features.
2. Name a clear owner for AI compliance in HR.
3. Vet providers: high-risk classification, declaration of conformity, CE marking, EU database registration.
4. Set up documented human oversight for every in-scope system.
5. Inform candidates, employees, and their representatives.
6. Check whether a fundamental rights impact assessment or an expanded DPIA is needed.
7. Establish bias tests, audit protocols, and ongoing monitoring.

Note: Even if the Digital Omnibus package may postpone high-risk obligations to 2 December 2027, the sequence above remains the planning basis. The AI literacy obligation and transparency requirements apply regardless.
Summary
AI in recruiting and workforce management is a high-risk area under the EU AI Act. The narrow exemptions in Art. 6(3) rarely apply here, and never when a system profiles natural persons, which most recruiting tools do. HR leadership faces a governance task that cannot be solved within HR alone. It requires clear ownership, a systematic inventory, structured vendor vetting, documented human oversight, transparent communication, and ongoing monitoring.
Companies that structure this now gain more than compliance. They gain clarity about which tools make which decisions, they build trust with candidates and employees, and they reduce the risk of discrimination claims, regulatory audits, and fines.
Those who wait until the first complaint arrives have missed the moment when compliance was still a leadership decision. By then, it becomes a reaction to a problem.
Document AI in HR with structure
SimpleAct turns compliance requirements into a controlled process. AI inventory, risk classification, human oversight, candidate notifications, audit trail. All in one place.
Get started for free →

This article is for general information purposes only and does not constitute legal advice. Requirements may change with the Digital Omnibus on AI. Last updated: April 2026.
About SimpleAct: SimpleAct is a German compliance platform that helps companies structurally document their AI systems in accordance with the EU AI Act. From registration to risk assessment to exportable audit reports. All in one place.
Author: Yannick · SimpleAct Team
