EU AI Act

EU AI Act Risk Levels Explained: Minimal, Limited and High-Risk

What do minimal risk, limited risk and high-risk mean under the EU AI Act? This article explains how companies can classify AI systems in practice and avoid common mistakes.

March 26, 2026
Kamill Jarzebowski | SimpleAct
7 min read
EU AI Act · Risk Classification · AI Risk Assessment · High-Risk AI · Limited Risk · Minimal Risk · AI Compliance

Understanding Risk Levels Under the EU AI Act

The key rule: do not start with the tool, start with the use case

When it comes to the EU AI Act, most companies quickly stumble over three terms: minimal risk, limited risk, and high risk. It sounds neat at first. In practice, it often creates uncertainty.

Is ChatGPT automatically minimal risk? Is a website chatbot already limited risk? And what about a recruiting tool that only pre-sorts applicants but doesn’t make the final decision?

The good news: The risk categories in the EU AI Act can be explained clearly. The bad news: It’s not enough to memorize a list of tools. What really matters is the specific use case.

This article explains how companies can practically classify their AI systems under the EU AI Act. No legal jargon—just clear examples and a logic that works in everyday business.


First things first: It’s not the tool that matters most, but how it’s used

This is the most important rule. Many companies look for a simple list like: ChatGPT = low risk, recruiting AI = high risk, chatbot = limited risk. The AI Act doesn’t work that way.

Too simplistic

“We only use ChatGPT, so it’s minimal risk.” That can be true—but it doesn’t have to be. Once a language model is embedded in sensitive processes, things quickly become more complex.

The better question

What exactly does the system do? Does it just help with writing? Does it interact with customers? Does it evaluate or prioritize people? That’s what determines the risk category.

The same base model can be relatively harmless in a marketing context, but much more sensitive in recruiting or credit decisions.


Overview of the three relevant risk categories

For most companies, three categories matter in practice: minimal risk, limited risk, and high risk. There are also prohibited practices, but those are not a risk category; they are the red line of the AI Act.

Minimal risk

Most everyday assistance systems fall into this category. Fewer regulatory requirements—but not automatically zero governance.

Limited risk

Primarily about transparency obligations. Users should be able to recognize when they are interacting with AI or viewing AI-generated content.

High risk

When AI is used in sensitive areas—such as employment, education, credit, or critical infrastructure—requirements increase significantly.


Minimal risk: The most common case, but not obligation-free

Most AI tools companies use today fall into the minimal risk category—such as drafting texts, internal summaries, brainstorming, translation, or design support.

But this does not mean: “We don’t need to worry about it.” Even minimal risk systems can create operational issues if teams input sensitive data into open tools, publish content without review, or lack visibility into which AI systems are actually in use.

Typical examples

Internal text drafts

Brainstorming with LLMs

Meeting summaries

Coding assistance

Still check

What data is being used?

Who is using the tool?

Is there review before publication?

Is the use case properly documented?


Limited risk: Mostly about transparency

Limited risk is often the most underestimated category—not because it’s complex, but because it’s very close to everyday use.

When users interact with an AI chatbot, when content is clearly AI-generated, or when synthetic media is used, transparency becomes key. People should be able to recognize that AI is involved.

This applies not only to large platforms, but also to mid-sized company websites, customer service teams, marketing departments, and SaaS products with assistant features.


1. Chatbots and AI assistants
If a user interacts with an AI system, it should be clear that they are not talking to a human. This is a classic transparency obligation under limited risk.


2. AI-generated content
If you publish largely AI-generated content, you should check whether labeling is required, or at least establish clear internal disclosure rules.


3. Synthetic media
Images, voices, or videos that appear highly authentic are particularly sensitive. Transparency should be considered early in the production process—not as an afterthought.


High risk: When AI influences decisions about people or critical areas

High risk is where many companies initially think: “This probably doesn’t apply to us.” That’s exactly why it’s often identified too late.

The EU AI Act focuses on sensitive use cases—not just the technology itself. If AI affects access to jobs, education, credit, services, or safety, things become serious.

Area | Why it’s sensitive
Recruiting and HR | AI directly affects employment opportunities when it evaluates, filters, or prioritizes candidates.
Credit and insurance | Scoring systems directly impact financial participation and access to services.
Education and assessment | Automated grading or classification can affect real educational outcomes.
Critical infrastructure / safety | Errors here are not just inconvenient—they can be safety-critical.

The most common misconception: “A human makes the final decision”

This is one of the biggest misunderstandings. Many teams assume a system cannot be critical if a human ultimately clicks “confirm.”

That’s not correct. If AI pre-sorts candidates, prioritizes cases, assigns scores, or filters options, it already shapes the decision space. The human no longer sees the full picture—only a pre-structured version.

That’s why it’s not just the final decision that matters, but the influence of AI on the entire process.


Four questions for an initial classification

You don’t need a perfect legal analysis to get started. These four questions go a long way:

1. Does the system directly affect people?
Applicants, customers, employees, students?

2. Does the system interact with users?
If yes, transparency obligations often apply.

3. Does the system evaluate or prioritize?
Scoring, ranking, filtering, and recommendations are more sensitive than pure assistance.

4. What data is used?
The more sensitive or personal the data, the more carefully you need to assess.
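For teams that keep their AI inventory in a structured form, the four questions above can be sketched as a simple triage helper. This is an illustrative sketch of the screening logic only, not a legal test, and all names and thresholds are hypothetical—a "potentially high-risk" result simply flags a use case for detailed review.

```python
from dataclasses import dataclass

# Hypothetical sketch: the four screening questions as boolean flags.
# This illustrates the triage logic; it is not a legal classification.
@dataclass
class UseCase:
    affects_people: bool        # Q1: applicants, customers, employees, students?
    interacts_with_users: bool  # Q2: chatbot, assistant, visible AI output?
    scores_or_ranks: bool       # Q3: scoring, ranking, filtering, prioritizing?
    sensitive_data: bool        # Q4: personal or otherwise sensitive data?

def preliminary_level(uc: UseCase) -> str:
    """Return a *preliminary* risk flag to be reviewed by an expert."""
    if uc.affects_people and uc.scores_or_ranks:
        return "potentially high-risk - review in detail"
    if uc.interacts_with_users:
        return "limited risk - check transparency obligations"
    if uc.sensitive_data:
        return "minimal risk - but review data handling"
    return "minimal risk"

# Example: a recruiting tool that pre-sorts applicants
cv_screener = UseCase(affects_people=True, interacts_with_users=False,
                      scores_or_ranks=True, sensitive_data=True)
print(preliminary_level(cv_screener))  # potentially high-risk - review in detail
```

Note how the sketch mirrors the article's point about pre-sorting: even without a final automated decision, scoring or ranking people is enough to flag the case for closer review.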


The biggest risk isn’t complexity—it’s delay

Many companies think classification must be perfect from the start. That often leads to inaction. A better approach is pragmatic: identify systems, document use cases, assign a preliminary risk level, flag open questions, and then review critical cases in more detail.

That’s not sloppy—it’s better than waiting months for a perfect solution.


Common mistakes in risk classification

Common mistakes

“A well-known tool must be harmless.”

“A human is involved, so it’s not critical.”

“We’ll classify it later when things are clearer.”

“The vendor will have taken care of compliance.”


Better approach

Evaluate the use case—not the product name.

Scrutinize scoring, ranking, and prioritization.

Start with a preliminary classification.

Actively request vendor documentation.


In practice, compliance means: understand, classify, document

For most companies, working with risk categories comes down to three steps:

1. Know which AI systems you use.
2. Understand what they actually do.
3. Clearly document why you classified them the way you did.
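As one illustration, those three steps can be captured as a minimal, structured inventory record per system: what it is, what it actually does, and why it received its preliminary classification. This is a sketch; all field names are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch of a minimal AI system inventory record.
@dataclass
class AISystemRecord:
    name: str                   # step 1: know which system you use
    use_case: str               # step 2: understand what it actually does
    preliminary_risk: str       # preliminary level: minimal / limited / high
    rationale: str              # step 3: document why you classified it that way
    open_questions: list = field(default_factory=list)

record = AISystemRecord(
    name="Website chatbot",
    use_case="Answers customer FAQs on the public website",
    preliminary_risk="limited",
    rationale="Interacts directly with users; transparency obligations apply",
    open_questions=["Is the AI disclosure visible on mobile?"],
)

# Export the entry as JSON for an audit trail
print(json.dumps(asdict(record), indent=2))
```

Even a record this small documents the reasoning behind a classification, which is exactly what makes a preliminary assessment defensible later.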

This won’t answer every legal detail—but it creates the foundation for meaningful compliance.


The fastest way to classify risk

SimpleAct is built exactly for this step: capture AI systems, classify them based on rules, make required measures visible, and export documented results. No Excel chaos, no scattered notes, and no starting from scratch each time.

Get started for free →


This article is for general information only and does not constitute legal advice. For specific cases, legal consultation is recommended. Status: March 2026.


About SimpleAct: SimpleAct is a German compliance platform that helps companies document their AI systems in line with the EU AI Act—from system inventory to risk assessment and audit-ready reporting, all in one place.

Learn more →


Kamill Jarzebowski | SimpleAct

Author · SimpleAct Team

Yannick Heisler

Sales · Personal Consulting