Best Practices

After Go-Live: What Deployers of AI Systems Actually Have to Do

An AI system going live isn't the end of compliance; it's the start of ongoing operations. The six obligations that continue, how to organize post-market monitoring under Art. 72, and when a deployer unintentionally becomes a provider.

May 10, 2026
Yannick | SimpleAct Team
8 min read

When an AI system goes live, project teams tend to breathe a sigh of relief. Risk classification done, documentation written, training delivered, first compliance report exported. Boxes ticked, on to the next project.

That's exactly the moment when most companies make their first structural mistake in AI compliance. Because go-live isn't the end of obligations. It's the start of ongoing operations. And the EU AI Act has its own specific requirements for that ongoing phase, requirements that many deployers underestimate.

This article shows which obligations continue after go-live, how they can be organized in practice, and why Article 72 (post-market monitoring) is relevant for deployers too, even though it's formally a provider obligation.


The six obligations that continue after go-live

Putting an AI system into production means taking responsibility for its entire lifecycle. The following six obligations are not one-time tasks. They are ongoing processes.


1. Keep AI literacy current (Art. 4)

The AI literacy obligation has been in force since February 2025, and it is not satisfied by a one-off training event. Employees working with AI systems must maintain current competency, including when the system changes or new people join the team.

In practice: Establish a training cycle (e.g. annual refresher), document onboarding training for new employees, and maintain a training matrix with date, content, and participants.
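
What such a training matrix can look like in its simplest form, sketched in Python (the field names and the twelve-month refresher interval are our assumptions, not requirements of Art. 4):

```python
from datetime import date, timedelta

# One row per delivered training: who, what, when. Field names and the
# 12-month refresher cycle are illustrative assumptions; Art. 4 does not
# prescribe a specific format or interval.
training_matrix = [
    {"employee": "a.mueller", "topic": "LLM basics + internal AI policy", "date": date(2025, 3, 12)},
    {"employee": "b.schmidt", "topic": "LLM basics + internal AI policy", "date": date(2024, 11, 3)},
]

REFRESHER_INTERVAL = timedelta(days=365)  # assumed annual cycle

def overdue_refreshers(matrix, today):
    """Return employees whose most recent documented training is too old."""
    latest = {}
    for row in matrix:
        if row["employee"] not in latest or row["date"] > latest[row["employee"]]:
            latest[row["employee"]] = row["date"]
    return [emp for emp, last in latest.items() if today - last > REFRESHER_INTERVAL]

print(overdue_refreshers(training_matrix, today=date(2026, 5, 1)))
# -> ['a.mueller', 'b.schmidt']: both trainings are older than twelve months
```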

2. Actively monitor the system (Art. 26(5))

Deployers of high-risk AI systems must monitor operations based on the provider's instructions for use. This includes ongoing observation of system behavior and assessing whether the system continues to perform as expected.

In practice: Define concrete monitoring KPIs (e.g. accuracy, error rate, bias indicators), set thresholds, and define what happens when they are exceeded. Who reviews? How often? Who is escalated to?
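
A minimal sketch of such a threshold check, assuming illustrative KPIs and limits (the concrete values must come from your own monitoring design and the provider's instructions for use):

```python
# Illustrative KPIs and limits; the real ones come from your monitoring
# design and the provider's instructions for use.
THRESHOLDS = {
    "accuracy":       {"min": 0.90},
    "error_rate":     {"max": 0.05},
    "bias_disparity": {"max": 0.10},  # e.g. selection-rate gap between groups
}

def check_kpis(measured: dict) -> list[str]:
    """Compare measured KPIs against thresholds; return breaches to escalate."""
    breaches = []
    for kpi, limits in THRESHOLDS.items():
        value = measured.get(kpi)
        if value is None:
            breaches.append(f"{kpi}: no measurement available")
        elif "min" in limits and value < limits["min"]:
            breaches.append(f"{kpi}: {value} below minimum {limits['min']}")
        elif "max" in limits and value > limits["max"]:
            breaches.append(f"{kpi}: {value} above maximum {limits['max']}")
    return breaches

# Weekly review: one breach -> escalate to the AI compliance owner.
print(check_kpis({"accuracy": 0.93, "error_rate": 0.08, "bias_disparity": 0.04}))
# -> ['error_rate: 0.08 above maximum 0.05']
```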

3. Retain logs (Art. 26(6))

Deployers must retain the automatically generated logs of high-risk systems, to the extent these logs are under their control. The retention period depends on the intended purpose, but is at least six months unless other legislation requires longer.

In practice: Clarify with the provider which logs are generated, where they are stored, and who has access. Define a retention policy. Pay attention to data protection requirements, especially for logs containing personal data.
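
A retention policy can be reduced to one rule: the per-system period applies, but never less than the six-month floor. A minimal sketch, assuming illustrative per-system periods:

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)  # Art. 26(6) floor, approximated in days

# Per-system retention periods are assumptions; other legislation (sector
# rules, GDPR storage limitation) may require more or forbid keeping longer.
RETENTION_POLICY = {
    "recruiting-assistant": timedelta(days=365),
    "credit-scoring": timedelta(days=730),
}

def earliest_deletion(system: str, log_created: date) -> date:
    """Policy period per system, but never below the six-month minimum."""
    period = max(RETENTION_POLICY.get(system, SIX_MONTHS), SIX_MONTHS)
    return log_created + period

print(earliest_deletion("recruiting-assistant", date(2026, 1, 10)))  # 2027-01-10
```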

4. Ensure human oversight (Art. 26(2))

High-risk AI systems must not operate without effective human oversight. Deployers must ensure that the persons assigned to oversight have the necessary competence, authority, and support to intervene.

In practice: Name the persons responsible for oversight. Make sure they understand the system and have the authority to intervene. Document when and how interventions occurred.
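
Documenting interventions doesn't require special tooling; an append-only log is enough. A minimal sketch (file name and fields are our choice):

```python
import json
from datetime import datetime, timezone

LOG_FILE = "oversight_interventions.jsonl"  # illustrative location

def record_intervention(system: str, person: str, action: str, reason: str) -> None:
    """Append one human-oversight intervention as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "person": person,   # the named oversight owner
        "action": action,   # e.g. "output overridden", "system paused"
        "reason": reason,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_intervention(
    system="recruiting-assistant",
    person="b.schmidt",
    action="output overridden",
    reason="Model downgraded a qualified candidate based on an irrelevant feature.",
)
```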

5. Report incidents (Art. 26(5) in conjunction with Art. 73)

If a deployer identifies a risk within the meaning of Art. 79(1) or a serious incident, they must inform the provider, the distributor or importer, and the relevant market surveillance authority without undue delay. Providers then face reporting deadlines of 2 to 15 days, depending on the severity of the incident.

In practice: Establish an internal escalation process: What qualifies as a reportable incident? Who decides? Who reports to whom, in what form, within what timeframe? An escalation guide with examples saves discussions in a real incident.
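
An escalation guide can encode the Art. 73 deadlines so nobody has to look them up during a live incident. A minimal sketch; the severity categories paraphrase Art. 73 and should be mapped to your own incident taxonomy:

```python
from enum import Enum

class Severity(Enum):
    # Categories paraphrase Art. 73; map them to your own incident taxonomy.
    WIDESPREAD_INFRINGEMENT = "widespread infringement / disruption of critical infrastructure"
    DEATH = "serious incident involving the death of a person"
    OTHER_SERIOUS = "other serious incident"

# Provider-side reporting deadlines under Art. 73, counted in days from
# awareness of the incident.
REPORTING_DEADLINE_DAYS = {
    Severity.WIDESPREAD_INFRINGEMENT: 2,
    Severity.DEATH: 10,
    Severity.OTHER_SERIOUS: 15,
}

def deadline_days(severity: Severity) -> int:
    """How many days the provider has; the deployer reports without undue delay."""
    return REPORTING_DEADLINE_DAYS[severity]

print(deadline_days(Severity.DEATH))  # -> 10
```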

6. Keep deployment context and risk classification current

As soon as the deployment context, data inputs, or system functionality change, the risk classification must be reassessed. "Limited risk" can become "high risk" if a system suddenly gets used for personnel decisions instead of just marketing copy.

In practice: Define triggers for review: new deployment context, new data source, major provider releases, departmental reorganization. At minimum, schedule a yearly review even when no trigger has fired.
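
The trigger list itself can be machine-checkable, for example as a gate in your change process. A minimal sketch with illustrative trigger names:

```python
# Trigger names are illustrative; define the list that fits your organization.
REVIEW_TRIGGERS = {
    "new_deployment_context",
    "new_data_source",
    "major_provider_release",
    "departmental_reorganization",
}

def needs_risk_review(changes: set[str]) -> bool:
    """Any hit on the trigger list forces a reassessment of the risk class."""
    return bool(changes & REVIEW_TRIGGERS)

# A marketing copy tool gets wired into candidate pre-screening:
print(needs_risk_review({"new_deployment_context"}))  # -> True
```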


The obligations nobody thinks about: informing employees and affected persons

Two obligations in particular tend to slip through the cracks:

Art. 26(7): When a high-risk AI system is used in the workplace, deployers must inform affected employees and their representatives before deployment. This applies whether or not a works council exists.

Art. 26(11): When a high-risk AI system makes decisions about persons or supports such decisions, the affected persons must be informed that they are subject to such a system.

Both obligations sound abstract, but in audits they are often the first ones asked about. They are easy to verify: Is there a documented information notice? When was it sent? To whom?


How to organize these obligations in practice

Six ongoing obligations sound like a lot. In practice, they fit well into a simple rhythm if responsibilities are clear.

Frequency    | Task                                               | Typical owner
Continuous   | System monitoring (KPIs, anomalies)                | Business unit + IT
On incidents | Escalation, reporting to provider and authorities  | AI compliance owner
Quarterly    | Review of logs, monitoring reports, incidents      | AI compliance + DPO
On change    | Reassess risk class, update documentation          | AI compliance + business unit
Annually     | Training refresher, scheduled compliance review    | HR + AI compliance

Practical tip: Attach AI compliance reviews to existing processes, like the quarterly data protection meeting or the annual audit. This reduces effort and ensures the topics actually get addressed.


Article 72: Why post-market monitoring affects deployers too

Article 72 of the EU AI Act regulates post-market monitoring, the surveillance of high-risk AI systems after they have been placed on the market. Formally, this is a provider obligation: providers must establish a documented monitoring system that systematically captures and analyzes the performance of their AI systems across the entire lifecycle.

Specifically, Art. 72 requires:

Para. 1: Providers must establish a post-market monitoring system that is proportionate to the nature and risks of the AI technology.

Para. 2: The monitoring system actively and systematically collects, documents, and analyzes relevant data on system performance. This data may explicitly be "provided by deployers or collected through other sources". The paragraph's last sentence carves out one exception: the obligation does not extend to sensitive operational data of deployers that are law enforcement authorities.

Para. 3: The system is based on a post-market monitoring plan, which forms part of the technical documentation under Annex IV. The EU Commission is developing a template for this plan.

The critical point for deployers is in paragraph 2: providers collect data that is provided by deployers. In other words, even though deployers don't have to run their own post-market monitoring under Art. 72, they are the central data source for their providers' monitoring. A provider who can't fulfill their Art. 72 obligations because they don't receive data from their deployers has a compliance problem. So, in practice, do deployers who refuse to deliver data or supply it unstructured.


What deployers should concretely do to support Art. 72

Four practical consequences for deployers follow from how the law is structured:

1. Know your provider's post-market monitoring plan. Ask for it actively. What data does your provider expect from you? In what format? At what frequency? If the provider can't produce a plan, that's a warning sign.

2. Make data delivery systematic. Define who in your company reports performance data to the provider, at what frequency, through what channel. This can be an automated data stream, a quarterly report, or incident-based reporting; a minimal sketch of such a report follows this list.

3. Mirror the monitoring internally. What your provider collects about you, you should be seeing too. An internal monitoring dashboard with the same KPIs you report to the provider creates transparency and supports productive conversations when something looks off.

4. Get contractual clarity. Your contract with the provider should specify which monitoring data you deliver, who owns this data, and what reaction obligations the provider has when something looks anomalous. Standard terms typically don't cover this.
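
What a structured data delivery from item 2 can look like, sketched as a JSON payload (field names and the quarterly cadence are assumptions; align them with what your provider's post-market monitoring plan actually asks for, and note that the Commission's forthcoming template may standardize this):

```python
import json
from datetime import date

# Field names and the quarterly cadence are assumptions; align them with
# the data your provider's post-market monitoring plan actually asks for.
def build_monitoring_report(system_id: str, start: date, end: date,
                            kpis: dict, incidents: list) -> str:
    """Assemble one reporting-period payload for delivery to the provider."""
    report = {
        "system_id": system_id,
        "reporting_period": {"start": start.isoformat(), "end": end.isoformat()},
        "kpis": kpis,             # the same KPIs you monitor internally
        "incidents": incidents,   # incident-based reporting, if any
        "deployer_contact": "ai-compliance@example.com",  # illustrative
    }
    return json.dumps(report, indent=2)

print(build_monitoring_report(
    system_id="recruiting-assistant",
    start=date(2026, 1, 1),
    end=date(2026, 3, 31),
    kpis={"accuracy": 0.93, "error_rate": 0.04},
    incidents=[],
))
```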


When a deployer becomes a provider

An important gray area many companies underestimate: anyone who substantially modifies an AI system can themselves become a provider under Art. 25 of the EU AI Act, with all the obligations that come with it, including Art. 72.

Art. 25 names three triggers in particular:

marketing the system under your own name or brand;

a substantial modification of an already placed high-risk system;

changing the intended purpose of an AI system in a way that makes it a high-risk system.

In all of these cases, a pure deployer becomes a provider in the sense of the AI Act. "We just use the system" turns into "we carry provider responsibility", including technical documentation, conformity assessment, and your own post-market monitoring plan.

Practical example: A company purchases a generic LLM API and builds an internal recruiting assistant on top that pre-screens candidate profiles. The original API was not intended for recruiting. By changing the purpose, the company likely becomes the provider of a high-risk AI system, with all provider obligations.


Summary

Go-live is the point where AI compliance shifts from project to process. Six obligations continue: AI literacy, monitoring, logs, human oversight, incident reporting, updating risk classification. Then there are the often overlooked information obligations toward employees and affected persons.

Article 72 is formally a provider obligation but has direct consequences for deployers. Anyone using a high-risk AI system is a data source for their provider's post-market monitoring. Ignoring this hurts not only the provider but also creates compliance problems for the deployer in audits.

And anyone who substantially modifies a system or uses it for a different purpose may switch roles entirely: from deployer to provider, with all the obligations that come along.


Document ongoing operations with structure

SimpleAct turns ongoing obligations into a repeatable process. AI registry, monitoring reviews, incident reports, audit trail. All in one place.

Get started for free →

This article is for general information purposes only and does not constitute legal advice. Requirements may change with the planned Digital Omnibus on AI. Last updated: April 2026.


About SimpleAct: SimpleAct is a German compliance platform that helps companies structurally document their AI systems in accordance with the EU AI Act. From registration to risk assessment to exportable audit reports. All in one place.

Learn more →

Tags
Go-Live · AI Act · SimpleAct
Yannick Heisler | SimpleAct Team
Author · Sales · Personal Consultation