
AI Incident Response Playbook for European SMEs: A Step-by-Step Guide to GDPR and EU AI Act Obligations



TL;DR: When AI causes harm, most European SMEs have no response procedure. This step-by-step playbook maps incident types to GDPR and EU AI Act notification timelines and walks each through a six-stage response: detect, contain, assess, notify, remediate, learn.

Most European SMEs have a data breach response procedure. Very few have an AI incident response procedure. That gap is becoming a liability.

The two are not the same thing. A data breach is a specific, well-defined event with a clear legal trigger: personal data has been compromised. An AI incident is broader, less predictable, and potentially more damaging. It might be a data breach caused by an AI vendor mishandling training data. It might be an AI system that produced a discriminatory hiring decision. It might be a hallucinated AI output that found its way into a client contract. It might be an employee using an unapproved AI tool that silently exfiltrated sensitive files.

Each of these scenarios carries different legal obligations, different timelines, and different remediation paths. None of them fit cleanly into a standard IT incident response template.

This playbook gives CEOs, CTOs, DPOs, and Heads of Operations a structured approach to AI incidents — from detection through to regulatory notification and operational learning. It is calibrated to SME scale: actionable with a small team and no dedicated AI legal counsel.


The Foundation: What Counts as an AI Incident

Before you can respond to an AI incident, you need a shared definition of what one is. This is not a bureaucratic exercise. Without a clear definition in your AI use policy, staff will not know when to escalate, and you will not have a consistent record for regulatory audit.

A working definition for most European SMEs: an AI incident is any event where an AI system produces, enables, or contributes to an outcome that causes harm or creates legal, regulatory, or reputational risk for the organisation or its stakeholders.

Harm includes: personal data exposure, discriminatory treatment, material financial loss, operational disruption, and reputational damage. Legal risk includes: regulatory notification obligations, contractual breach, and exposure to subject access requests or data subject complaints.

Write this definition into your AI use policy, together with the escalation path it triggers.


The Five Incident Categories Every SME Should Plan For

Incident response planning works best when it is taxonomy-driven. Generic incident checklists produce generic responses. The following five categories cover the AI incident types most commonly encountered by European SMEs and map to distinct regulatory obligations.

Category 1: Data breach via AI tool. A third-party AI tool processes personal data without an adequate Data Processing Agreement, or processes it beyond the scope of the DPA in place. This is a GDPR Article 33 notifiable event if the breach meets the threshold: a risk to the rights and freedoms of natural persons. The 72-hour notification clock starts from when the controller has a reasonable degree of certainty that a breach has occurred — not from discovery of every detail.

Category 2: Discriminatory AI output. An AI system produces an output that disadvantages an individual or group based on a protected characteristic — in hiring, pricing, customer service triage, credit assessment, or product allocation. This category does not always trigger immediate regulatory notification, but it creates exposure under GDPR Article 22 (automated decision-making), the EU AI Act (if the system is high-risk), and sector-specific equality obligations. It requires containment, a bias investigation, and documented remediation.

Category 3: Hallucination causing material harm. An AI system generates plausible but incorrect content — a contract clause, a financial figure, a medical summary, a legal analysis — that is used without adequate review and causes material harm to the business or a third party. The regulatory implications depend on context: if personal data was involved, GDPR may apply; if the output influenced a regulated activity, sector obligations apply. In all cases, this is an operational failure requiring root cause analysis and a change to the review workflow.

Category 4: Shadow AI incident. An employee uses an unapproved AI tool, typically a consumer-facing generative AI application, in a way that exposes company data, client data, or personal data to a third-party vendor without consent or a DPA. Shadow AI is the most common source of unplanned AI incidents in SMEs and the hardest to detect without monitoring controls. If the data exposed includes personal data, this is a potential GDPR breach.

Category 5: AI system failure causing operational disruption. An AI system that supports a core operational process — customer communication, scheduling, document processing, fraud detection — fails in a way that disrupts service delivery. This category is primarily an operational resilience issue, but if the system is high-risk under the EU AI Act, Article 73 may require serious incident reporting to the national market surveillance authority.


Regulatory Notification Obligations by Incident Type

The notification obligations across these five categories are not uniform. This is the single most important thing an SME needs to understand before an incident occurs, because notification deadlines cannot be met if you spend the first 24 hours working out whether you need to notify anyone.

GDPR Article 33 requires notification to the supervisory authority within 72 hours of becoming aware of a personal data breach, unless the breach is unlikely to result in a risk to the rights and freedoms of individuals. The notification must include: the nature of the breach, categories and approximate number of data subjects affected, the likely consequences, and the measures taken or proposed. If full information is not available within 72 hours, notification proceeds with available information and is completed in phases.
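One point that catches teams out is that the 72-hour window runs in calendar hours, not business hours. A minimal sketch of the deadline arithmetic (the function name and example timestamp are illustrative, not from any library):

```python
from datetime import datetime, timedelta, timezone

ARTICLE_33_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest submission time for the Article 33 notification.

    The clock runs from awareness (reasonable certainty a breach
    occurred), in calendar hours: weekends and holidays do not pause it.
    """
    return aware_at + ARTICLE_33_WINDOW

# A breach confirmed on a Friday evening must be notified by Monday evening.
aware = datetime(2025, 6, 6, 17, 30, tzinfo=timezone.utc)  # Friday
print(notification_deadline(aware))  # 2025-06-09 17:30:00+00:00, Monday
```

The practical consequence: whoever owns the notification decision must be reachable outside office hours.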

GDPR Article 34 requires direct notification to affected data subjects when the breach is likely to result in a high risk to their rights and freedoms. This is a separate obligation from the supervisory authority notification and operates on a "without undue delay" timeline rather than 72 hours.

EU AI Act Article 73 requires providers and deployers of high-risk AI systems to report serious incidents to the relevant national market surveillance authority. A serious incident is one that leads to the death of a person or serious harm to their health, a serious and irreversible disruption of critical infrastructure, or an infringement of fundamental rights. The reporting timeline is not the GDPR 72-hour window: the report is due once a causal link between the AI system and the incident is established or reasonably likely, and no later than 15 days after awareness, shortened to 10 days where a death is involved and to 2 days for widespread infringements or serious disruption of critical infrastructure.

For Categories 1 and 4 (data breaches), GDPR Article 33 is the primary clock. For Category 2 (discriminatory output from a high-risk system), EU AI Act Article 73 may apply depending on severity. For Category 3 (hallucination causing harm), notification obligations depend on whether personal data was involved and the nature of the harm. For Category 5 (system failure), EU AI Act Article 73 applies if the system is high-risk and the disruption caused a serious incident as defined by the Act.
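The category-to-obligation mapping above can be encoded as a simple lookup table so the incident owner does not reconstruct it under pressure. This is a sketch under the playbook's own five-category taxonomy; the keys and label strings are illustrative, and the output is a list of candidates to assess, not a legal determination:

```python
# Candidate notification obligations per incident category (1-5, per the
# taxonomy above). Labels are illustrative shorthand, not legal advice.
NOTIFICATION_MAP = {
    1: ["GDPR Art. 33 (72h to supervisory authority)",
        "GDPR Art. 34 if high risk to data subjects"],
    2: ["EU AI Act Art. 73 if high-risk system and serious incident",
        "GDPR Art. 22 exposure: assess, may not require notification"],
    3: ["Context-dependent: GDPR if personal data involved; sector rules otherwise"],
    4: ["GDPR Art. 33 if personal data exposed",
        "GDPR Art. 34 if high risk to data subjects"],
    5: ["EU AI Act Art. 73 if high-risk system and serious incident"],
}

def obligations(category: int) -> list[str]:
    """Return the candidate notification obligations for an incident category."""
    return NOTIFICATION_MAP.get(
        category, ["Unknown category: escalate for legal review"]
    )
```

Keeping this table in the incident runbook, and reviewing it when regulations change, means the Stage 3 assessment starts from a checklist rather than a blank page.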


The Playbook: Detect → Contain → Assess → Notify → Remediate → Learn

This six-stage structure is adapted to SME operational realities. Each stage has a named owner and a time target. Before an incident occurs, assign these roles.

Stage 1: Detect. Most AI incidents are detected by staff encountering unexpected outputs, clients reporting anomalies, or automated monitoring flagging unusual system behaviour. The AI compliance monitoring checklist provides the detection layer. Without monitoring, incidents surface late. Every staff member must know the escalation path and have a named person to contact. This is not optional — it must be written into the AI use policy.

Stage 2: Contain. Within the first hour, the immediate harm must be stopped from spreading. For a data breach: revoke the AI tool's data access. For a discriminatory output: suspend the decision until reviewed. For a hallucinated output in a client document: quarantine the document and notify the client that a review is underway. For a shadow AI incident: require the employee to log out and change credentials. Containment is not remediation — it is triage.

Stage 3: Assess. Determine the scope and category of the incident. What data was involved? How many individuals are affected? Is there a regulatory notification obligation? This assessment drives all subsequent decisions. Do not skip this stage under pressure — an incorrectly characterised incident leads to the wrong response. Document everything from this point forward.

Stage 4: Notify. Execute notification obligations based on the assessment. If GDPR Article 33 applies, the 72-hour clock is running. Assign one person to own the supervisory authority notification. Prepare a factual, non-speculative account of what is known. Do not delay notification pending full information — partial notification within the deadline is compliant; delayed full notification is not. Notify affected data subjects if Article 34 thresholds are met.

Stage 5: Remediate. Address the root cause, not just the symptom. A hallucination incident caused by inadequate output review requires a change to the review workflow, not just correction of the specific output. A shadow AI incident caused by lack of policy awareness requires training, not just a reprimand. Remediation actions must be documented with an owner and a completion date.

Stage 6: Learn. Every AI incident is a signal. Conduct a structured post-incident review within two weeks. What detection gap allowed this to occur? What policy, technical, or training change would prevent recurrence? Document findings in the incident log and update the relevant governance procedures. Feed findings back into the governance framework.


Your 4-Hour Incident Response Checklist

When an AI incident is first reported, the first four hours determine whether the response is controlled or reactive. This checklist is designed to be run by one person — the designated AI incident owner — without requiring external legal input immediately.

Within 30 minutes: Confirm the incident has occurred and categorise it (use the five categories above). Assign containment lead. Initiate containment actions. Begin documentation — timestamp every action from this point.

Within 1 hour: Complete initial scope assessment. Determine whether personal data is involved (GDPR trigger). Determine whether a high-risk AI system is involved (EU AI Act trigger). Brief the CEO or DPO.

Within 2 hours: Determine whether GDPR Article 33 notification is required. If yes, the 72-hour clock is running — assign the supervisory authority notification to a named owner. If uncertain, treat it as required until legal advice is obtained. Notify affected internal stakeholders.

Within 4 hours: Containment confirmed and documented. Notification decision made and documented. If client notification is required, a draft communication is ready for review. Incident log entry created. If external legal or regulatory support is needed, it has been engaged.
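The four checklist milestones above are relative to the moment the incident is first reported, so each incident gets its own absolute deadlines. A minimal sketch (milestone wording abbreviated from the checklist; the function name is illustrative):

```python
from datetime import datetime, timedelta

# Milestones from the 4-hour checklist, as offsets from first report.
CHECKLIST = [
    (timedelta(minutes=30), "Confirmed and categorised; containment started; documentation begun"),
    (timedelta(hours=1),    "Scope assessed; GDPR and EU AI Act triggers checked; CEO/DPO briefed"),
    (timedelta(hours=2),    "Article 33 decision made; notification owner assigned; stakeholders notified"),
    (timedelta(hours=4),    "Containment and notification documented; incident log entry created"),
]

def due_times(reported_at: datetime) -> list[tuple[datetime, str]]:
    """Convert the relative checklist into absolute deadlines for one incident."""
    return [(reported_at + offset, task) for offset, task in CHECKLIST]
```

For example, an incident reported at 09:00 puts the first milestone at 09:30 and the last at 13:00 the same day.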


Building the Incident Log

The incident log is your primary evidence artefact for regulatory audit. It is also a learning tool. Structure it to serve both purposes.

Every incident entry should record: the date and time of detection, the date and time the incident owner was notified, the incident category, a factual description of what occurred, the data categories and approximate number of individuals affected, the AI system and vendor involved, containment actions taken with timestamps, notification decisions with justifications, remediation actions with owners and due dates, and the post-incident review outcome.
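The fields listed above translate directly into a record structure, whether the log lives in a spreadsheet, a ticketing system, or code. A sketch of one entry as a Python dataclass; the field names are hypothetical, but the required content mirrors the list above:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentLogEntry:
    """One entry in the AI incident log. Field names are illustrative."""
    detected_at: datetime
    owner_notified_at: datetime
    category: int                      # 1-5, per this playbook's taxonomy
    description: str                   # factual, non-speculative account
    data_categories: list[str]
    individuals_affected: int          # approximate count is acceptable
    ai_system: str
    vendor: str
    containment_actions: list[tuple[datetime, str]] = field(default_factory=list)
    notification_decision: str = ""    # decision plus justification
    remediation_actions: list[dict] = field(default_factory=list)  # owner, due date
    review_outcome: str = ""           # post-incident review findings
```

Whatever the storage medium, keeping every entry to this fixed shape is what makes the log usable both as audit evidence and as a dataset for spotting recurring failure patterns.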

The log must be retained for a minimum of three years for GDPR purposes, and potentially longer if the incident involved a high-risk AI system subject to EU AI Act record-keeping requirements. Store it with restricted access — it contains sensitive information about system vulnerabilities and data exposures.

Do not use the incident log as a blame record. Its purpose is operational learning and regulatory compliance. Written in that spirit, it becomes one of the most valuable documents in your governance portfolio.


Frequently Asked Questions

What is the GDPR 72-hour notification requirement in practice for a small company?

The 72-hour clock starts from the point at which you have reasonable certainty that a personal data breach has occurred — not from the moment you understand every detail of it. For a small company, this means you need a designated person (the DPO or a named individual if no formal DPO is required) who can make the notification decision quickly. The notification to your national supervisory authority does not need to be complete — it can be phased. What matters is that it is submitted within 72 hours with available information, with a commitment to provide additional details as the investigation proceeds.

Are we required to report every AI incident to a regulator?

No. GDPR breach notification applies only when a personal data breach creates a risk to individuals' rights and freedoms. EU AI Act Article 73 serious incident reporting applies only when a high-risk AI system causes a serious incident as defined by the Act — death, serious health or safety risk, or fundamental rights infringement. Many AI incidents — a hallucinated output, an internal system failure, a shadow AI discovery that did not expose external data — carry no regulatory notification obligation. However, all incidents should be logged internally regardless of notification outcome.

How do we handle a hallucinated AI output that was sent to a client?

First, contain: contact the client promptly, acknowledge the error, and withdraw the document or output. Do not minimise the error or wait to see if the client noticed. Second, assess: did the output contain personal data, financial advice, or contractual terms that created legal exposure? Depending on the nature of the output and the client relationship, this may trigger contractual, regulatory, or professional liability obligations. Third, remediate the workflow that allowed an unreviewed AI output to reach a client. Fourth, document the full response in the incident log.

What counts as a "serious incident" under EU AI Act Article 73?

Article 73 defines a serious incident as one that causes or is reasonably likely to cause the death of a person, a serious and irreversible impairment to health, a serious disruption in critical infrastructure, or a serious infringement of fundamental rights. For most European SMEs, the relevant threshold is the fundamental rights infringement category — discriminatory AI outputs that cause material harm, for example. The definition is higher than the GDPR breach threshold, which means many AI incidents that require GDPR notification will not reach the EU AI Act serious incident threshold. When in doubt, legal advice is warranted.




Need help building an AI incident response procedure before you need it? Talk to our AI governance team