Be honest: Do you actually know how many AI tools are running in your company right now?
Most people we ask pause for a moment – and then say something like: "A few people use ChatGPT, and we just rolled out Copilot." But when we take a closer look together, it's suddenly eight, ten, or twelve systems. Canva AI in marketing, an AI feature buried in the CRM, a recruiting tool that "does something with AI."
That's not a criticism. That's just everyday life in most companies.
But this everyday reality is about to become a regulatory issue. On August 2, 2026, the EU AI Act – the world's first comprehensive AI regulation – takes full effect. From that date, companies must be able to prove on request which AI systems they use, how they're deployed, and whether their use is lawful. If you can't, you're looking at serious fines.
In this article, we'll walk through the five AI tools we see most often in companies – and classify them according to the EU AI Act risk categories. Not in the abstract, but the way it actually plays out in practice.
A Quick Primer: How Risk Classification Works
Before we get into the individual tools, it's worth understanding the basic principle. The EU AI Act uses a risk-based approach and divides AI systems into four categories – depending on how significantly they can impact people's lives.
Unacceptable Risk – Full stop. AI systems that manipulate people or engage in social scoring are simply banned. These prohibitions have been in force since February 2025.
High Risk – AI that makes decisions about people: in recruitment, healthcare, credit scoring. These systems face the strictest requirements – from technical documentation to mandatory human oversight.
Limited Risk – AI that interacts with people or generates content. The core obligation here is transparency. Users need to know they're dealing with AI.
Minimal Risk – The vast majority of AI applications. Hardly any additional regulatory requirements, but since February 2025, one rule applies regardless: anyone using AI must ensure their employees are properly trained (Article 4, AI literacy).
Here's what many people miss: It's not the technology that determines the risk class – it's the context of use. The same tool can be classified very differently depending on its purpose. An image recognition algorithm that sorts cat photos is a completely different story from one that evaluates job applicants' headshots.
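To make that concrete, here's a minimal sketch of what context-dependent classification can look like when you write it down as rules. The rule set, the category names, and the classify helper are our own simplified illustration – this is not the legal test itself:

```python
# Simplified illustration: the same tool lands in different risk classes
# depending on its purpose of use. This is NOT the legal test, just a sketch.

RISK_RULES = [
    # (condition on the use case, resulting risk class)
    (lambda use: use["affects_hiring_or_promotion"], "high"),
    (lambda use: use["interacts_with_people"] or use["generates_content"], "limited"),
    (lambda use: True, "minimal"),  # fallback for everything else
]

def classify(use_case: dict) -> str:
    """Return the first matching risk class for a described use case."""
    for condition, risk_class in RISK_RULES:
        if condition(use_case):
            return risk_class
    return "minimal"

# The same image-recognition tool, two very different outcomes:
sorting_cat_photos = {
    "affects_hiring_or_promotion": False,
    "interacts_with_people": False,
    "generates_content": False,
}
screening_applicant_photos = {
    "affects_hiring_or_promotion": True,
    "interacts_with_people": False,
    "generates_content": False,
}

print(classify(sorting_cat_photos))          # -> "minimal"
print(classify(screening_applicant_photos))  # -> "high"
```

The point of the sketch: the tool never changes, only the use-case description does – and that's what flips the risk class.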
1. ChatGPT – The Quiet Workhorse
Typical use: Marketing copy, email drafts, summaries, brainstorming, code assistance
Risk class: Limited Risk – in most cases
If there's one AI tool that has truly arrived everywhere, it's ChatGPT. In marketing, sales, development – and yes, in the C-suite too. Sometimes it's officially sanctioned, sometimes people just quietly use it on their own (that's what the industry calls "Shadow AI").
For standard use cases – writing text, gathering ideas, summarizing something – ChatGPT falls under "limited risk." That sounds relaxed, but it still means you need to document that it's being used and be transparent about where AI-generated content shows up.
Things get interesting when ChatGPT starts being used for more sensitive tasks. Summarizing job applications? Analyzing customer complaints? That's when the risk class can shift quickly – and with it, the compliance requirements.
Our advice: Register ChatGPT as an AI system, set clear usage policies (especially: what data is allowed in?), and train your employees. It's not a massive undertaking – but it needs to happen.
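What can "clear usage policies" look like in practice? One pragmatic option is a technical guardrail that screens prompts for obviously sensitive data before they leave the company. The two patterns below are a deliberately minimal sketch of our own making – a real policy needs proper data loss prevention tooling and legal review:

```python
import re

# Minimal sketch of a pre-send check: block prompts that obviously contain
# personal data. Real deployments need far more robust detection.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of all policy violations found in a prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = check_prompt("Summarize the complaint from max.mustermann@example.com")
if violations:
    print("Prompt blocked, contains:", ", ".join(violations))
```

Even a check this crude makes the policy tangible for employees: "what data is allowed in?" stops being an abstract question.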
2. Microsoft 365 Copilot – The AI That's Already Everywhere
Typical use: Summarizing emails, creating presentations, analyzing data in Excel, meeting notes in Teams
Risk class: Limited Risk – for standard office use
Copilot is a special case because it's so deeply embedded in the Microsoft ecosystem. It sits in Outlook, PowerPoint, Excel, Teams – and accesses company data stored in SharePoint and OneDrive. For everyday office work, it falls under limited risk.
But the real issue with Copilot isn't so much the risk class – it's the question of what data this tool can actually see. Many companies discover after rollout that Copilot finds documents some employees weren't supposed to know about. That's primarily a data protection concern (GDPR), but it's equally relevant for AI documentation under the EU AI Act.
Our advice: Register Copilot centrally, review your permission settings, and check regularly whether the usage context has changed. With Copilot, that happens faster than you'd think – because Microsoft keeps shipping new features.
3. Canva AI / Adobe Firefly – When Marketing Meets AI
Typical use: Social media graphics, presentation design, AI-generated images, background removal
Risk class: Limited Risk
In nearly every marketing department we've worked with, AI-powered design tools have become standard. Canva AI and Adobe Firefly lead the pack – generating images, removing backgrounds, creating design suggestions from text prompts.
Both fall under "limited risk" because they produce synthetic content. And here's an obligation many companies underestimate: AI-generated content must be labeled. Sounds simple, but in practice it's often patchy. Who makes sure the social media team marks every AI-generated image? Who checks the advertising materials?
Our advice: Create a straightforward internal policy for labeling AI content – and train your marketing team. It's not a big effort, but it creates clarity. And of course, document Canva AI / Firefly as an AI system.
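A starting point: make the label a required field in your asset workflow instead of relying on memory. Here's a minimal sketch – the field names and the disclosure wording are our own assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class MarketingAsset:
    title: str
    ai_generated: bool       # must be set consciously, no default
    tool: str | None = None  # e.g. "Canva AI", "Adobe Firefly"

def disclosure(asset: MarketingAsset) -> str:
    """Return the caption an asset must carry before publishing."""
    if asset.ai_generated:
        return f"{asset.title} (image generated with {asset.tool or 'AI'})"
    return asset.title

banner = MarketingAsset(title="Summer campaign banner",
                        ai_generated=True, tool="Adobe Firefly")
print(disclosure(banner))
# -> "Summer campaign banner (image generated with Adobe Firefly)"
```

Because ai_generated has no default value, nobody can publish an asset without having answered the question at least once.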
4. AI in Recruiting – This Is Where It Gets Serious
Typical use: Automated resume screening, AI-driven shortlisting, video analysis, matching algorithms
Risk class: High Risk
Now we've reached the point where most people sit up and pay attention. The EU AI Act explicitly classifies AI systems used in recruitment as high risk – specifically when they influence decisions about hiring, promotion, or termination. It's spelled out in Annex III of the regulation.
Why so strict? Because here, AI has a direct hand in decisions about people's livelihoods. And because algorithmic bias in recruiting tools is well documented by now. A system that systematically disadvantages certain groups of applicants wouldn't just be unfair – it would be a fundamental rights violation.
For companies, this means: If you're using an AI-powered recruiting tool (whether it's HireVue, Workday AI, Greenhouse, or any other), you're in the EU AI Act's strictest regime. The provider has to deliver the risk management system, the technical documentation, and the data quality checks – and as the deploying company, you're responsible for human oversight, suitable input data, and ongoing monitoring. The full package, split across both sides.
Our advice: Take a close look at your HR tools. There's often more AI under the hood than meets the eye. An "intelligent matching" feature or "automated shortlisting" is, in most cases, AI within the meaning of the EU AI Act – and therefore potentially high risk.
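For that first look under the hood, even a crude keyword scan over your vendors' feature descriptions can surface candidates worth investigating. The indicator list below is our own rough heuristic – a keyword match is a prompt to investigate, never a legal classification:

```python
# Rough heuristic for a first pass over your HR tool landscape: flag feature
# descriptions whose wording suggests AI-driven decision support.
AI_INDICATORS = ("intelligent matching", "automated shortlisting",
                 "candidate scoring", "ranking", "prediction", "ai-powered")

def flag_for_review(features: dict[str, str]) -> list[str]:
    """Return feature names whose descriptions contain AI indicators."""
    return [name for name, description in features.items()
            if any(term in description.lower() for term in AI_INDICATORS)]

hr_suite = {
    "Career page builder": "Drag-and-drop editor for job listings",
    "Talent match": "Intelligent matching of applicants to open roles",
}
print(flag_for_review(hr_suite))  # -> ['Talent match']
```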
5. CRM Systems with AI – The Underestimated Blind Spot
Typical use: Lead scoring, customer segmentation, churn prediction, automated campaigns, chatbots
Risk class: Minimal to Limited Risk – with exceptions
Salesforce Einstein, HubSpot AI, Pipedrive Smart Features – virtually every modern CRM now has AI capabilities built in. And most users have no idea just how much AI is running under the hood.
For typical marketing and sales use cases (lead scoring, segmentation, campaign optimization), the risk falls in the minimal to limited range. But that changes quickly when the CRM starts making decisions with real consequences for people: automated credit assessments, price discrimination, or fully automated rejections.
And then there are the chatbots. Nearly every CRM now offers an AI chatbot for customer service. That falls under at least limited risk – and users must be able to tell they're talking to AI, not a human.
Our advice: Do an inventory of all AI features in your CRM. There are usually more than you think. And ask yourself specifically: Is this system making any automated decisions that affect real people?
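That question can literally be the first filter in your inventory. A tiny triage sketch – the two flags are our own simplification of a much more nuanced legal assessment:

```python
def needs_closer_look(feature: dict) -> bool:
    """Flag CRM AI features whose output directly affects real people."""
    return feature["automated_decision"] and feature["affects_individuals"]

crm_features = [
    {"name": "Lead scoring",           "automated_decision": False, "affects_individuals": False},
    {"name": "Service chatbot",        "automated_decision": False, "affects_individuals": True},
    {"name": "Automated credit check", "automated_decision": True,  "affects_individuals": True},
]
for feature in crm_features:
    if needs_closer_look(feature):
        print(feature["name"], "-> review risk class")
# -> "Automated credit check -> review risk class"
```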
At a Glance: Risk Classes Overview
ChatGPT (standard use) – Limited Risk
Microsoft 365 Copilot (office use) – Limited Risk
Canva AI / Adobe Firefly – Limited Risk
AI recruiting tools – High Risk
CRM AI features – Minimal to Limited Risk, with exceptions
So What Now? The First Step Is Easier Than You Think
If you've read this far, you probably already know more about AI risk classes than most of your competitors. That's a good start – but knowledge alone won't keep you compliant.
The good news: Most AI tools in everyday business fall under "minimal" or "limited" risk. That doesn't mean a months-long compliance project – it means structured documentation. But that documentation needs to actually exist by August 2026 at the latest.
And regardless of risk class, one thing already applies today: companies need to know which AI systems are in use. The AI literacy obligation (Art. 4) has applied since February 2025.
The first step is always the same: Get a clear picture. Which AI tools are running in your company? In which departments? For what purposes?
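If you want to start today, even a handful of structured records beats a blank page. Here's a minimal sketch of what a single register entry could capture – the fields are our suggestion, not a format the EU AI Act prescribes:

```python
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    name: str                  # e.g. "ChatGPT"
    department: str            # e.g. "Marketing"
    purpose: str               # e.g. "Drafting social media copy"
    risk_class: str            # "minimal" | "limited" | "high"
    decides_about_people: bool # the question that matters most

register = [
    AISystemRecord("ChatGPT", "Marketing", "Drafting copy", "limited", False),
    AISystemRecord("Talent match", "HR", "Shortlisting applicants", "high", True),
]
for record in register:
    print(asdict(record))
```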
That's exactly what we built SimpleAct for. Our platform walks you through the entire process: register AI systems, classify them using a rule-based approach, document them with compliance checklists – and export a ready-made audit report whenever you need one. In 2–3 hours, you'll have your first complete documentation. No legal jargon, no spreadsheet chaos.
This article is for general information purposes only and does not constitute legal advice. The risk classification of your AI systems should always be based on the specific context of use. If in doubt, we recommend seeking legal counsel.
About SimpleAct: SimpleAct is a German compliance platform that helps companies structurally document their AI systems in accordance with the EU AI Act. From registration to risk assessment to exportable audit reports – all in one place.
Author: Yannick · SimpleAct Team
