High-Risk AI under Annex III EU AI Act

If your AI system falls under Annex III of the EU AI Act, extensive obligations apply: technical documentation, conformity assessment, registration and ongoing monitoring. Check here whether your system is affected.

What makes an AI system high-risk?

An AI system is considered high-risk if it is used in one of the areas listed in Annex III AND has a significant influence on decisions affecting the rights or safety of persons. Additionally, AI systems serving as safety components of products must meet the same requirements.

The 8 categories under Annex III

1. Biometric identification

Remote biometric identification (e.g. facial recognition, fingerprint matching), emotion recognition, and biometric categorisation based on sensitive attributes such as race, political opinions or sexual orientation

2. Critical infrastructure

Safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity

3. Education and vocational training

Access decisions to educational institutions, exam assessment, learning progress tracking, competence evaluation

4. Employment and HR management

Recruitment screening, performance assessment, promotion decisions, termination and task allocation systems

5. Essential services

Credit scoring, social benefit decisions, emergency service dispatch, health insurance classification

6. Law enforcement

Crime risk assessment, lie detection, crime prevention, evidence analysis

7. Migration and border control

Risk assessment at border crossings, processing of asylum and visa applications, border area surveillance

8. Justice and democratic processes

AI assistance in judicial decisions, sentencing, electoral influence, political advertising with targeting

Obligations for high-risk AI deployers

Deployers of high-risk AI systems must use the system in accordance with the provider's instructions, assign human oversight to suitably trained staff, monitor operation, retain the automatically generated logs, and ensure that input data under their control is relevant and sufficiently representative (Article 26 EU AI Act).

Quick check: Is your AI system affected?

  • Is the system used in one of the 8 areas under Annex III?
  • Does the system have significant influence on decisions affecting people?
  • Does the system serve as a safety component of a regulated product?

If you answered yes to both of the first two questions, or to the third, your system is likely to be classified as high-risk AI. A legal review is recommended.

Frequently asked questions about high-risk AI

Is every HR system considered high-risk AI?

Not automatically. An HR system is high-risk if it contains AI components for recruitment, performance assessment or termination and these have significant influence on decisions. Simple administrative systems without decision relevance are not covered.

What are the consequences of violations?

For prohibited AI use: up to €35 million or 7% of worldwide annual turnover, whichever is higher. For other high-risk obligation violations: up to €15 million or 3% of worldwide annual turnover, whichever is higher. Market withdrawals and operational shutdowns can also be ordered.
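Because the cap is "whichever is higher" (Article 99 EU AI Act), the effective maximum fine depends on turnover. A minimal arithmetic sketch, with function and parameter names chosen for illustration:

```python
def fine_cap_eur(prohibited_use: bool, annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine: the higher of the
    fixed cap and the turnover-based cap (Art. 99 EU AI Act)."""
    if prohibited_use:
        # Prohibited practices: €35M or 7% of worldwide annual turnover
        return max(35_000_000, 0.07 * annual_turnover_eur)
    # Other high-risk obligation violations: €15M or 3%
    return max(15_000_000, 0.03 * annual_turnover_eur)

# Company with €1bn annual turnover violating high-risk obligations:
print(fine_cap_eur(False, 1_000_000_000))  # 30000000.0, i.e. €30 million
```

For smaller companies the fixed amount dominates; for large ones the percentage does.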

Who is responsible – provider or deployer?

Both bear responsibility, but different kinds. Providers (who develop the system) bear primary responsibility for documentation and conformity assessment. Deployers (who use it) must ensure oversight, logging and proper use.

Does Annex III also apply to AI systems from third countries?

Yes. If an AI system from a third country is deployed in the EU or its outputs are used in the EU, the EU AI Act applies. Importers and distributors have their own compliance obligations.

Document high-risk AI with SimpleAct

SimpleAct helps you identify high-risk AI systems, document them comprehensively and fulfil all EU AI Act obligations – in one structured, auditable system.

Arturs Nikitins