Legal logic
SimpleAct derives role, deployment context, and review cadence per AI system. Teams see early which obligations apply and when a system must be reviewed again.
- Review owner and cadence
- Role model per system
- Traceable obligation profile
SimpleAct is the operational AI governance platform for the EU AI Act: capture, assess, and document AI systems – with governance workflows, incident management, runtime monitoring, and integrations for Jira, Teams, or ServiceNow. Everything in one system, audit-ready.
No credit card required · Watch the demo
The AI Act requires companies to demonstrate where and how AI is used, whether use is permitted, and whether sensitive data is processed – or face significant penalties.
With proper documentation you stay on the safe side. Without proof, fines of up to €35 million or 7% of global annual turnover, whichever is higher, can be imposed. With SimpleAct you avoid these risks.
High-risk AI systems must be fully documented by 2 August 2026. SimpleAct supports you with the required documentation so you avoid these risks.
Most companies do not know which AI systems their employees use, whether sensitive data is processed, or whether AI makes decisions itself. With SimpleAct you avoid these risks.
Capturing AI use without a system means: unstructured Excel lists, no systematic risk assessment, no version control. With SimpleAct you avoid these risks.
From 2 August 2026, companies must be able to demonstrate on request which AI systems they use, how they are used, and whether use is permitted. Without documentation, audits and fines are at risk.
💡 Acting now means creating transparency and documenting automatically – not panicking when authorities request evidence.
Answer 5 short questions in under 1 minute and find out whether your company needs AI documentation.
e.g. ChatGPT, Microsoft Copilot, recruiting software, CRM with AI, internal AI solutions
⚖️ This check is not legal advice. If in doubt we recommend legal review.
SimpleAct provides a central platform for structured capture, rule-based assessment, and auditable documentation of all AI systems under the EU AI Act.
Capture all AI systems in use centrally: name, provider, description, category (internal/external), role, purposes, affected areas, responsible person, and contact email.
Each AI system is assessed via a structured questionnaire and automatically classified using rule-based logic under the EU AI Act. You can re-assess any system at any time.
Compliance checklists per risk class with EU AI Act article references. Additional documentation for high-risk systems. A structured compliance report can be exported with versioning; all changes are recorded.
The report includes organisation profile, report metadata, and per AI system: master data, last review, risk assessment, compliance checklist (status, open items), documentation entries. Finally the audit history with a chronological list of changes (timestamp, action, entity, user).
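The report described above can be pictured as a nested structure. The sketch below is illustrative only: the field names and values are assumptions based on this description, not SimpleAct's actual export schema.

```python
# Illustrative shape of an exported compliance report.
# All keys and sample values are hypothetical, not SimpleAct's real schema.
report = {
    "organisation_profile": {"name": "ACME GmbH", "responsible": "J. Doe"},
    "metadata": {"version": 3, "generated_at": "2026-02-01T09:00:00Z"},
    "ai_systems": [
        {
            "master_data": {"name": "CRM Assist", "provider": "ExampleVendor"},
            "last_review": "2026-01-15",
            "risk_assessment": {"class": "limited"},
            "compliance_checklist": {"status": "in_progress", "open_items": 2},
            "documentation_entries": ["usage policy", "DPIA reference"],
        }
    ],
    # Chronological audit trail: timestamp, action, entity, user.
    "audit_history": [
        {
            "timestamp": "2026-01-15T10:12:00Z",
            "action": "reassessed",
            "entity": "CRM Assist",
            "user": "j.doe",
        }
    ],
}

print(len(report["ai_systems"]), report["ai_systems"][0]["risk_assessment"]["class"])
```

Keeping the audit history inside the same export means a single file answers both "what is the current state?" and "how did we get here?".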
Free tools help with initial assessment. SimpleAct is your permanent compliance solution.
The platform exposes the work already present in the product: from legal logic through governance and audit playbook to incidents, runtime signals, assurance workflows, and API connectivity.
Legal logic derives role, deployment context, and review cadence per AI system, so teams see early which obligations apply and when a system must be reviewed again.
Owner, reviewer, approver, evidence, and approvals stay inside one governance flow. Critical decisions do not disappear into email or isolated files.
The audit playbook turns articles and gaps into concrete actions. Teams see owners, SLAs, due dates, open points, and the direct path to missing evidence.
Incidents are not just logged. SimpleAct keeps status, reassessment triggers, compliance gate, CAPA, and authority-response cases in one place.
Monitoring templates, runtime signals, change register, and observability profiles show what happens after deployment and what follow-up work is required.
For demanding setups, SimpleAct combines dataset register, bias findings, human oversight, validation suites, pipeline gates, registry, and authority packs.
API keys, webhooks, OpenAPI, ingestion endpoints, and integrations into Jira, Teams, or ServiceNow help connect governance and runtime work from the platform.
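The integration layer described above can be sketched as a simple authenticated POST to an ingestion endpoint. The URL, header layout, and payload fields below are illustrative assumptions; the platform's OpenAPI spec defines the real contract.

```python
# Hypothetical sketch: pushing a runtime signal to an ingestion endpoint.
# Endpoint URL, API key, and payload fields are assumptions for illustration.
import json
import urllib.request

API_KEY = "sk-example"  # placeholder key
ENDPOINT = "https://api.example.com/v1/signals"  # placeholder URL


def build_signal_request(system_id: str, metric: str, value: float) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST carrying a runtime signal."""
    payload = json.dumps({"system_id": system_id, "metric": metric, "value": value})
    return urllib.request.Request(
        ENDPOINT,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_signal_request("ai-sys-42", "drift_score", 0.18)
# urllib.request.urlopen(req) would actually send it.
print(req.get_method(), req.full_url)
```

Building the request separately from sending it keeps the sketch runnable offline and makes the wire format easy to inspect or test.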
The base layer remains: AI inventory, structured questions, risk logic, compliance checklists, and exportable evidence. This layer feeds all downstream modules.
SimpleAct Operating Model
This is what separates SimpleAct from checklist-only or register-only tools: the platform connects assessment, control, operations, and evidence into one visible workflow.
SimpleAct connects obligations, tasks, evidence, incidents, runtime signals, and integrations. Teams work inside one connected flow instead of isolated tools.
Operational product flow
Inside the product, legal logic, governance, audit playbook, incident management, and runtime monitoring work together. For each AI system, this makes visible what has been reviewed, what remains open, and which actions come next.
Step 1
Teams capture AI systems, place them into business context, and create the base layer for everything that follows.
Step 2
SimpleAct keeps role, recurring review questions, and review ownership visible per system.
Step 3
Owners, reviewers, open points, missing evidence, and approvals stay visible as one operational workflow.
Step 4
After rollout, runtime signals, incidents, changes, and CAPA measures stay attached to the affected system instead of separate ticket silos.
Operational Artifacts
Files, links, notes, and approvals per system or topic.
Version, lineage, bias findings, and personal-data relation.
Benchmarks, revalidation triggers, red teaming, and shadow mode.
Metrics, alert thresholds, sources, dashboard URL, and on-call roles.
Conformity, CE, EU database, contacts, and supporting artifacts.
Connect governance, incident, and monitoring flows into external systems.
This content is not based on a theoretical roadmap. It follows modules, forms, and workflows that already exist in the product.
Owner, reviewer, approver, minimum approved evidence, and finalization gates per subject.
Dataset register, bias findings, human oversight, validation suites, pipeline gates, authority pack, and registry.
Runtime signals, incident records, reassessment triggers, change register, CAPA, and compliance gate form one operating chain.
API keys, webhooks, ingestion endpoints, plus Jira, Teams, and ServiceNow connectivity for enterprise setups.
Not just inventory and export: the platform connects legal logic, governance, audit playbooks, runtime signals, and incidents into one operating flow.
A new system moves from classification to approval through one connected product flow.
Model changes and runtime signals do not stay isolated. They create review work and actions.
Incidents are closed through CAPA, reassessment, and authority packs in the same system.
Anyone who wants the deeper explanation of what defines an AI governance system can open the full positioning and process page there.
From capture to auditable evidence – and beyond: governance workflows, runtime monitoring, and incident management as the permanent operating system for your AI stack.
Sign up and enter basic company data and the person responsible for your AI compliance management.
Enter all AI tools in use: ChatGPT, VS Code AI, Canva AI, internal applications. Capture takes only 1–2 minutes per system.
Answer guided questions about your AI system. Based on your answers, the system automatically determines the appropriate risk class – no legal expertise required.
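The automatic risk determination described above can be sketched as a small rule-based classifier. The question keys, thresholds, and category lists below are illustrative assumptions, not SimpleAct's actual logic or the legal criteria themselves.

```python
# Hypothetical sketch of rule-based risk classification under the EU AI Act.
# Question keys and rule sets are illustrative assumptions, not legal advice
# and not SimpleAct's real rule engine.

PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_AREAS = {"recruitment", "credit_scoring", "critical_infrastructure"}


def classify(answers: dict) -> str:
    """Map questionnaire answers to a risk class, most severe rule first."""
    if answers.get("use_case") in PROHIBITED_USES:
        return "prohibited"
    if answers.get("application_area") in HIGH_RISK_AREAS:
        return "high"
    if answers.get("interacts_with_humans"):
        return "limited"  # transparency obligations apply
    return "minimal"


print(classify({"application_area": "recruitment"}))  # high
```

Evaluating the most severe rules first means an answer that triggers several categories always lands in the strictest applicable class.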
Depending on risk class, a specific compliance checklist is shown: Minimal Risk (basic documentation), Limited Risk, High Risk. Each checklist includes EU AI Act article references.
After setup, the real governance begins: dashboard, audit playbook, incident management with CAPA, runtime monitoring with signals and change register, assurance workflows with bias findings and validation suites – all connected, all audit-ready.
Quick onboarding – then governance, monitoring, and incident management run permanently as your AI operating system
From €159/month billed annually, or €199/month billed monthly. A compliance solution that guides you through structured AI documentation. No surprises.
For small teams
For growing companies
For large organisations
For a fast product check
You start with the full Starter setup, can create real data, and test the platform under realistic conditions. After 30 days the trial does not silently stop; it transitions into the Starter subscription unless you cancel in time.
Start 30-day trial
For selected companies we enable a guided pilot mode. This is not an open self-serve plan, but an approved pilot with defined scope, owners, and a clear go/no-go decision at the end.
By request only
Pilot projects are enabled individually and after the period either transition into a regular subscription or end cleanly.
See pilot project
Questions about pricing? See our FAQ
As a German company we take data protection and security seriously. Made in Germany means the highest standards with no compromise.
Security & Infrastructure
European data protection
German quality
All your data is stored exclusively on German servers in Nuremberg. Backups are held in Falkenstein (Hetzner). No cloud providers outside the EU.
End-to-end encryption and regular security reviews keep your data secure.
Our team has experience in data protection (GDPR) and AI governance.
Developed and hosted in Germany with a focus on data protection and reliability.
Fresh updates on the EU AI Act, AI compliance, and practical implementation guidance.