Risk assessment
Learn how to perform structured risk assessments for AI systems.
Risk assessment is the core of your EU AI Act documentation. SimpleAct assigns each system to an appropriate risk class using a structured questionnaire.
Note: The classification is rule-based and depends on your answers. Precise wording around use context and impact is therefore essential.
The three assessment blocks
Section A: High-risk triggers
Checks sensitive areas such as employment, credit, health, or other legally relevant decisions.
Section B: Limited-risk triggers
Checks transparency duties, for example for chatbots, generated content, or end-user interaction.
Section C: Usage context
Captures autonomy level, internal or external effect, and the concrete use context.
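The precedence between the blocks can be sketched as a small rule-based classifier: any Section A trigger dominates, then Section B triggers, otherwise the system falls back to minimal risk. The function and answer keys below are illustrative assumptions, not SimpleAct's actual questionnaire schema.

```python
# Sketch of the rule-based precedence (illustrative names, not SimpleAct's API).
# Section A triggers dominate, then Section B; Section C context refines the
# resulting duties but does not change the class in this simplified sketch.

def classify(answers: dict) -> str:
    """Map questionnaire answers to a risk class."""
    section_a = ["affects_people", "legal_effect", "sensitive_data"]
    section_b = ["chatbot", "generated_content", "biometric_features"]
    if any(answers.get(q) for q in section_a):
        return "high"
    if any(answers.get(q) for q in section_b):
        return "limited"
    return "minimal"

print(classify({"legal_effect": True}))       # high
print(classify({"generated_content": True}))  # limited
print(classify({}))                           # minimal
```

This is why wording matters: a single Section A answer flipped by an imprecise use-context description changes the class outright.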
1. Select system
Start the assessment from an already captured inventory entry.
- Select the system in the inventory
- Review the use context
- Check previous assessments
2. Answer high-risk triggers
Assess potential impacts on people, rights, decisions, and sensitive data.
- Decisions affecting people
- Legal or equivalent effect
- Use of sensitive data
3. Review limited-risk triggers
Review transparency and disclosure duties for interaction and content generation.
- Chatbot interaction
- Generated content
- Biometric categorisation or emotion recognition
4. Complete context
Add organisational context, controls, and approvals.
- Internal or external use
- Supportive or autonomous
- Controls and approvals
5. Run the assessment
Save the answers and let the system calculate the risk class automatically.
- Review the result
- Document the risk class
- Trigger follow-up checklist work
Risk classes
Minimal risk
Baseline documentation, clear purpose description, and traceable ownership.
Limited risk
Additional transparency and disclosure duties, especially for interaction or content generation.
High risk
Expanded duties such as risk management, data governance, technical documentation, and human oversight.
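The duty sets build on each other, so they can be represented as a lookup in which each class extends the one below it. The duty names are paraphrased from the descriptions above and are not an exhaustive legal checklist.

```python
# Duties per risk class; each class extends the one below it.
# Entries are paraphrased from the class descriptions above (illustrative,
# not an exhaustive legal checklist).
MINIMAL = ["baseline documentation", "purpose description", "traceable ownership"]
LIMITED = MINIMAL + ["transparency and disclosure duties"]
HIGH = LIMITED + ["risk management", "data governance",
                  "technical documentation", "human oversight"]

DUTIES = {"minimal": MINIMAL, "limited": LIMITED, "high": HIGH}

print(DUTIES["high"])
```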
Reassess and version
Create a new assessment whenever the use, data sources, or system boundaries change; versioning the assessments keeps the history auditable.
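One way to keep such a history auditable is append-only versioning: each reassessment adds a new immutable record rather than overwriting the previous one. The field names below are assumptions for illustration, not SimpleAct's data model.

```python
from dataclasses import dataclass
from datetime import date

# Append-only assessment history: reassessing adds a new version instead of
# mutating the old one, so earlier classifications stay auditable.
# Field names are illustrative assumptions, not SimpleAct's data model.

@dataclass(frozen=True)
class Assessment:
    version: int
    risk_class: str
    assessed_on: date
    reason: str

def reassess(history: list, risk_class: str, reason: str) -> list:
    nxt = Assessment(len(history) + 1, risk_class, date.today(), reason)
    return history + [nxt]  # earlier versions remain untouched

history = reassess([], "limited", "initial assessment")
history = reassess(history, "high", "new data source with legal effect")
print([(a.version, a.risk_class) for a in history])
```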