EU AI Act · Art. 5–7 & Annex III

AI Risk Classification under EU AI Act

The EU AI Act divides all AI systems into four risk classes. The classification determines which compliance obligations you must fulfill. Understand the criteria – and classify your AI systems correctly.

The 4 Risk Classes in Detail

Unacceptable Risk
Prohibited since 2 February 2025
Examples
  • Government social scoring systems
  • Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
  • AI that uses subliminal or manipulative techniques to materially distort behavior
  • AI that exploits vulnerabilities due to age, disability, or social or economic situation
  • Predictive policing based on profiling
Compliance Requirements

Prohibited. Such systems may not be placed on the market, put into service, or used in the EU.

High Risk
Extensive documentation obligations
Examples
  • AI in critical infrastructure (energy, water, transport)
  • AI in education (access decisions)
  • Recruitment AI, promotion decisions
  • AI in credit and creditworthiness assessment
  • AI in law enforcement and justice
  • AI in healthcare (medical devices)
  • AI for border control and migration
  • Biometric identification systems
Compliance Requirements

Risk management system, data governance, full technical documentation, logging, human oversight, accuracy and robustness requirements, conformity assessment, CE marking, registration in the EU database.

Limited Risk
Transparency obligations
Examples
  • Chatbots and virtual assistants
  • Deepfake image and video generators
  • AI-generated content (text, audio, video)
  • Emotion recognition systems
Compliance Requirements

Users must be informed they are interacting with an AI system. AI-generated content must be labeled as such.

Minimal Risk
No specific obligations
Examples
  • AI-based spam filters
  • AI in video games and entertainment
  • AI-supported production planning
  • Simple recommendation systems
  • AI-supported inventory management
Compliance Requirements

No specific obligations under the AI Act. Voluntary codes of conduct and best practices are recommended.
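
Seen as data, the four tiers map cleanly onto an enum, which can be useful for an internal system inventory. The following is a minimal Python sketch; the names and one-line summaries are our own shorthand, not wording from the Act.

    # Minimal sketch: the four-tier taxonomy as Python data for an internal
    # inventory. Names and summaries are illustrative shorthand, not Act wording.
    from enum import Enum

    class RiskClass(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    OBLIGATIONS = {
        RiskClass.UNACCEPTABLE: "Prohibited: may not be marketed or used in the EU.",
        RiskClass.HIGH: "Risk management, technical documentation, conformity "
                        "assessment, CE marking, EU database registration.",
        RiskClass.LIMITED: "Transparency: disclose AI interaction, label AI content.",
        RiskClass.MINIMAL: "No specific obligations; voluntary codes recommended.",
    }

    print(OBLIGATIONS[RiskClass.LIMITED])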

Decision Aid: Is My System High-Risk?

1. Is the system listed in Annex III of the EU AI Act?
   Annex III explicitly lists 8 categories of high-risk AI.
2. Does the system make decisions with significant impact on people?
   E.g. credit granting, hiring, education access, medical diagnosis.
3. Is the system a safety component of a product under EU harmonisation legislation (Annex I)?
   E.g. in machinery, vehicles, medical devices.
4. Does the system process special categories of personal data?
   Health data, biometric data, political opinions, etc.
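
These four questions can be turned into a rough first-pass screen. The sketch below is illustrative only: all names are hypothetical, and it deliberately ignores finer points of the Act such as the Art. 6(3) derogation for narrow procedural tasks, so it cannot replace a legal assessment.

    # Rough first-pass screen over the four questions above. Hypothetical names;
    # simplifies the Act (e.g. no Art. 6(3) derogation) and is not legal advice.
    from dataclasses import dataclass

    @dataclass
    class Screening:
        listed_in_annex_iii: bool     # Q1: listed in Annex III?
        significant_impact: bool      # Q2: significant impact on people?
        safety_component: bool        # Q3: safety component under Annex I rules?
        special_category_data: bool   # Q4: special categories of personal data?

    def screen(s: Screening) -> str:
        if s.listed_in_annex_iii or s.safety_component:
            return "likely high-risk: Art. 9-15 obligations probably apply"
        if s.significant_impact or s.special_category_data:
            return "borderline: perform and document a full assessment"
        return "likely not high-risk: still check transparency duties (Art. 50)"

    print(screen(Screening(False, True, False, False)))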

How to Classify Your AI System

1. Identify the System
   Define the exact use case, target group, and decisions made.
2. Check Annex III
   Verify whether the system is listed in one of the 8 categories in Annex III.
3. Assess Impact
   Evaluate what impact errors or wrong decisions by the system would have.
4. Document the Classification
   Document the classification with justification – this is subject to audit.
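
Because step 4 is what an auditor will ask for, it helps to capture the outcome in a fixed structure. Below is a minimal sketch of such a record; the fields mirror the four steps but are our own suggestion – the Act does not prescribe a format.

    # Minimal classification record covering the four steps. Field names are
    # our own suggestion; the EU AI Act does not prescribe a specific format.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ClassificationRecord:
        system_name: str
        intended_purpose: str            # step 1: use case, target group, decisions
        annex_iii_category: str | None   # step 2: matched Annex III category, if any
        impact_assessment: str           # step 3: consequences of errors
        risk_class: str                  # e.g. "high-risk"
        justification: str               # step 4: reasoning, ready for audit
        assessed_on: date = field(default_factory=date.today)
        assessed_by: str = ""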

Frequently Asked Questions about AI Risk Classification

What happens if I misclassify my system?

Incorrect classification can lead to substantial fines: under Art. 99, up to EUR 35 million or 7% of global annual turnover for prohibited practices, and up to EUR 15 million or 3% for breaches of other obligations. If you classify a system below its actual risk level, the obligations that actually apply go unmet – which is itself a violation of the EU AI Act.

Can classifications change?

Yes. If the purpose, target group, or functionality of the system changes, the classification must be reviewed. Annex III can also be adjusted by the EU Commission through delegated acts.

Who is responsible for the classification?

Providers bear primary responsibility for systems they develop themselves. As a deployer you share responsibility for how the system is used – and you take on provider obligations if you substantially modify a system or change its intended purpose.

AI Risk Classification with SimpleAct

SimpleAct guides you through risk classification of your AI systems using rule-based logic and automatically generates EU AI Act-compliant documentation.

Classify AI systems now
