
What Happens If I Classify Wrong? Liability, Fines and Audit Risk Under the EU AI Act

A wrong AI risk classification has consequences across three dimensions: personal liability of management, fines of up to €15 million, and audit risk in market surveillance. A factual overview of what the EU AI Act and national law provide for.

April 24, 2026
Yannick | Simpleact Team
8 min read
EU AI Act · AI Compliance · Risk Classes

Classifying an AI system into one of the four EU AI Act risk categories sounds like a formality. Fill in a questionnaire, read the result, move on. In practice, it's the decision that determines the entire compliance program that follows.

That's why there's a strong temptation to simplify. A tool sitting somewhere in the gray zone between "limited risk" and "high risk" gets classified as limited risk. It's faster. And if it really is high risk, nobody will notice anyway.

The problem: someone does notice. Not necessarily tomorrow, but at the latest during the first audit, the first complaint, or the first incident. At that point, the question is no longer whether to correct the classification, but what consequences the misclassification has already triggered.

This article explains the three dimensions in which misclassifications have consequences: liability, fines, and audit risk. Not as a scare tactic, but as a factual overview of what the EU AI Act and national law provide for.


Why misclassifications happen systematically

Before we get to consequences, a look at typical causes. Misclassifications are rarely intentional. They arise from structural problems:

Context blindness: The same tool can be classified differently in different contexts. A marketing chatbot is limited risk. The same chatbot making credit recommendations in a bank can be high risk. Whoever classifies without context almost inevitably classifies incorrectly (see the sketch after this list).

Missing criteria knowledge: Annex III of the AI Act lists eight areas in which AI systems qualify as high risk, including recruitment, credit scoring, assessment in education, and access to essential public services. Whoever doesn't know the list can't check whether it applies.

Organizational distance: The person doing the classification often isn't the person using the tool. IT classifies a recruiting tool based on vendor information, while HR uses it differently than intended. The classification remains formally correct while diverging from reality.

Downgrade tendency: A higher risk class means more documentation effort. Anyone under time pressure tends to choose the lower class when in doubt. Human, but not compliant.
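How quickly context flips the result can be shown in a few lines. The following Python sketch is illustrative only: the area names and the decision rule are our simplification, not the full Art. 6 / Annex III test.

```python
from dataclasses import dataclass

# Deployment areas loosely modeled on Annex III; illustrative, not exhaustive.
ANNEX_III_AREAS = {
    "recruitment",
    "credit_scoring",
    "education_assessment",
    "essential_public_services",
}

@dataclass
class Deployment:
    system: str          # e.g. "chatbot-v2"
    area: str            # the concrete use context, not the vendor's label
    affects_decisions_about_persons: bool

def risk_class(d: Deployment) -> str:
    """Sketch: the class follows from the deployment, not the tool."""
    if d.area in ANNEX_III_AREAS and d.affects_decisions_about_persons:
        return "high"
    return "limited_or_minimal"  # a real assessment needs the full Art. 6 test

# Same tool, two contexts, two results:
marketing = Deployment("chatbot-v2", "marketing", affects_decisions_about_persons=False)
banking = Deployment("chatbot-v2", "credit_scoring", affects_decisions_about_persons=True)
print(risk_class(marketing))  # limited_or_minimal
print(risk_class(banking))    # high
```

The point of the sketch: the classification is a function of the deployment, not of the tool, which is exactly what a context-blind assessment misses.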


Dimension 1: Liability

Liability for misclassification is more complex than often presented. The EU AI Act itself doesn't regulate civil liability. It regulates public law obligations. Civil liability follows from national law and, in the medium term, from adjacent EU legislation.

In concrete terms:

Personal liability of management

In Germany, managing directors owe the company the duty of care of a prudent businessperson under § 43 GmbHG, which includes ensuring compliance with legal requirements. Anyone who violates compliance obligations or fails to ensure their implementation is personally liable to the company for resulting damages. Similar directors-and-officers liability rules apply in most EU jurisdictions.

Liability toward affected persons

When people are disadvantaged by a misclassified AI system, such as through an algorithmic rejection in recruiting, claims for damages under anti-discrimination law, national civil codes, or Art. 82 GDPR can arise. The misclassification often becomes central evidence: if the system was classified as minimal risk but actually served high-risk use cases, that discrepancy supports a finding of negligence.

Contractual liability toward customers

B2B customers increasingly ask for AI compliance documentation during contract negotiations. Assurances based on false classifications can lead to breach of contract, penalties, or termination. Particularly relevant for companies embedding AI into products that end up in regulated industries.

Medium term: Product Liability Directive

The EU Commission withdrew the originally planned AI Liability Directive in 2025 but is working on alternatives. The revised Product Liability Directive (PLD) has been in force since December 2024 and explicitly covers software, including AI systems. It eases the burden of proof for injured parties claiming damages caused by defective AI.


Dimension 2: Fines

Article 99 of the EU AI Act defines a three-tier fine system; in each tier, the higher of the fixed amount and the turnover percentage applies (for SMEs, the lower, see below). For misclassifications, two tiers are particularly relevant:

| Violation | Fine range | Typical scenario |
| --- | --- | --- |
| Violation of high-risk AI obligations | up to €15 million or 3% of global annual turnover | A high-risk system classified as limited risk and operated without risk management |
| Incorrect information to authorities | up to €7.5 million or 1% of global annual turnover | An incorrect risk classification stated in audits or documentation to supervisory authorities |
| Violation of prohibited practices | up to €35 million or 7% of global annual turnover | Extreme case: a system actually prohibited under Art. 5 classified as permissible |

Important for SMEs and startups: Under Art. 99(6), small and medium-sized enterprises face the lower of the two thresholds (fixed amount or turnover percentage). A startup with €2 million annual turnover faces a maximum top-tier fine of €140,000, not €35 million. This significantly reduces the amounts, but doesn't change the violation itself.
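As a quick back-of-the-envelope check, the cap for a given tier can be computed like this. A minimal Python sketch with assumed turnover figures, not legal advice:

```python
def art99_cap(fixed_eur: int, pct: float, turnover_eur: int, is_sme: bool) -> float:
    """Maximum fine for one Art. 99 tier.

    Companies generally face the higher of the fixed amount and the
    turnover share; under Art. 99(6), SMEs face the lower of the two.
    """
    turnover_share = pct * turnover_eur
    return min(fixed_eur, turnover_share) if is_sme else max(fixed_eur, turnover_share)

# Top tier (prohibited practices): €35 million fixed or 7% of turnover.
print(art99_cap(35_000_000, 0.07, 2_000_000, is_sme=True))       # 140000.0 — the startup example
print(art99_cap(35_000_000, 0.07, 1_000_000_000, is_sme=False))  # 70000000.0
```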

When determining the actual fine amount, supervisory authorities consider multiple factors: the nature, severity, and duration of the violation, the number of affected persons, the degree of fault, previous violations, cooperation with the authority, and company size. An intentional or grossly negligent misclassification leads to significantly higher fines than a demonstrably honest mistake.

In practice, this means: a traceable, documented risk assessment can make the difference between a low and a high fine. An undocumented classification is not only wrong, but worsens the legal consequences.


Dimension 3: Audit risk

The third dimension is often underestimated, yet in practice it is the most likely to materialize. While multi-million-euro fines primarily hit corporations, audit problems are real for any company.

Audit risk arises in several constellations:

Regulatory market surveillance: Under Art. 74 of the AI Act, national market surveillance authorities (in Germany: the Federal Network Agency) monitor compliance with the regulation. They can conduct risk-based sampling, request documentation, and perform on-site audits. A wrong classification is identified at the latest when the assigned risk class is checked against the actual deployment context.

Internal audits and ISO 42001: Companies seeking ISO 42001 certification must have their AI governance, including classification processes, audited. External auditors ask targeted questions: "By what criteria was this classified? Who reviewed the classification? When was the last review?" An insufficiently justified classification leads to audit findings.

Customer audits in B2B relationships: Especially in regulated industries (finance, healthcare, insurance), customers audit their suppliers for compliance. An AI vendor that can't explain why their system was classified as limited risk loses contracts. The classification becomes part of the service commitment.

Whistleblowing and reports: Employees who notice a misclassification can report it under whistleblower protection laws. Such a report typically triggers an intensive internal and potentially regulatory review.


What makes a defensible classification

The common denominator across all three dimensions is traceability. A risk classification that can be reconstructed reduces liability, fines, and audit risk alike. A classification that cannot be reconstructed increases all three.

Traceable means, concretely:

Documented criteria: Which questions were evaluated? What answers were given? Which criteria led to the classification?

Defined use context: In what specific context is the system deployed? Who uses it for what? Which data is processed? Which decisions does it influence?

Named responsible person: Who performed the classification? Who reviewed it? Who approved it?

Timestamps and versioning: When was it classified? When was it last reviewed? What changes were made?

Event-based review: When the system, deployment context, or legal situation changes, the classification is reassessed and documented.

All of this can in principle be kept in a spreadsheet. But that requires discipline, and spreadsheets usually lack automatic versioning and a tamper-proof audit log. Specialized tools take this part off your hands by generating the documentation as a byproduct of the assessment process.
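To make the list above concrete, a traceable classification record could look roughly like this. The structure and field names are a hypothetical illustration, not a format prescribed by the AI Act:

```python
# Hypothetical record structure; the fields mirror the traceability
# criteria above, not any officially mandated format.
classification_record = {
    "system": "chatbot-v2",
    "risk_class": "high",
    "criteria": [  # documented questions and answers behind the result
        {"question": "Annex III area affected?", "answer": "yes: credit scoring"},
        {"question": "Influences decisions about persons?", "answer": "yes: loan recommendations"},
    ],
    "use_context": "credit recommendations for retail customers",
    "classified_by": "jane.doe",
    "reviewed_by": "max.mustermann",
    "classified_at": "2026-03-02T10:15:00Z",
    "last_reviewed_at": "2026-04-20T09:00:00Z",
    "version": 3,
    "review_trigger": "deployment context changed: rollout to a second business unit",
}
```

Whether this lives in a spreadsheet, a Git repository, or a dedicated tool matters less than that every field is filled in and every change is versioned.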


The difference between an error and a misclassification

Important to understand: not every subsequent correction of a risk classification is a misclassification in the legal sense.

When a company performs a classification, documents it, applies the criteria, and arrives at a result that is later corrected (due to changed deployment context or new insights), this is not an error but part of a functioning compliance process. Regular reviews are explicitly anticipated.

It becomes problematic when the original classification was made without appropriate review, when criteria weren't applied, or when the deployment context was deliberately misrepresented. That's no longer a correction, but the discovery of a misclassification.

The line between them runs along one question: was the original assessment diligent given the knowledge available at the time? If yes, the later correction is unproblematic. If not, the question of consequences arises.


Conclusion

Risk classification under the EU AI Act is not a formality. It's the central determination of which documentation, review, and oversight obligations apply to a company. A wrong classification has consequences across three dimensions: it can trigger liability claims, lead to fines, and turn audits into problems.

The good news: all three risks can be significantly reduced by making the classification traceable. Documented criteria, clear deployment context, named responsible persons, versioning. This isn't rocket science, but it requires structure.

Anyone who classifies and documents with structure may still make the wrong decision in some cases. But they make it in a grounded, traceable, and correctable way. And that is exactly what authorities, auditors, and courts examine when it matters.


This article is for general information purposes only and does not constitute legal advice. The liability and regulatory consequences of a misclassification depend on the specific individual case. If in doubt, we recommend involving specialized legal counsel. Last updated: April 2026.
