AI Incident Response for Healthcare Providers: A Practical Playbook Under EU AI Act and MDR

TL;DR: AI incident response playbook for European healthcare under EU AI Act and MDR. Steps, roles, timelines, and documentation for clinical directors.

Healthcare AI incidents do not look like IT outages. When a software system goes down, you restore it. When an AI system produces a wrong output that influences a clinical decision, you need to know what happened, who was affected, which decisions were made on the basis of that output, and what you are legally required to report.

European healthcare providers using AI are now operating under two overlapping frameworks: the EU AI Act (in force since August 2024, with high-risk requirements applying from August 2026) and the Medical Device Regulation for any AI classified as a medical device. This playbook translates those frameworks into a concrete incident response process for clinical directors, compliance officers, and operations managers at small and mid-sized healthcare organizations.

What counts as an AI incident in healthcare

The EU AI Act defines a "serious incident" (Article 3(49)) as an incident or malfunctioning of an AI system that directly or indirectly leads to, or could have led to, any of the following:

  • The death of a person, or serious harm to a person's health
  • A serious and irreversible disruption of the management or operation of critical infrastructure
  • An infringement of obligations under Union law intended to protect fundamental rights
  • Serious harm to property or the environment

For healthcare AI, this is a broad definition. An AI-assisted triage tool that systematically underweights a symptom pattern. A diagnostic support system that assigns incorrect risk scores to a patient population. A scheduling algorithm that delays high-priority cases. Any of these could qualify.

Below the serious incident threshold, healthcare AI systems generate anomalies: outputs that are unexpected, inconsistent with the clinical record, or flagged by clinical staff as incorrect. These are not necessarily reportable, but they need to be logged and investigated.
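
Neither framework prescribes a log schema, so teams have latitude in how they capture anomalies. Below is a minimal sketch in Python of what a structured anomaly entry could record; the field names and example values are illustrative, not mandated by any regulation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AnomalyLogEntry:
    """One logged AI anomaly. All field names are illustrative."""
    system_id: str         # which AI system produced the output
    detected_at: datetime  # when the anomaly was flagged
    flagged_by: str        # e.g. "clinical staff" or "automated monitor"
    description: str       # what was unexpected or inconsistent
    case_reference: str    # pseudonymous case reference, no direct PII
    investigated: bool = False  # set to True once reviewed

# Hypothetical example entry:
entry = AnomalyLogEntry(
    system_id="triage-assist-v2",
    detected_at=datetime.now(timezone.utc),
    flagged_by="clinical staff",
    description="Risk score inconsistent with documented symptom pattern",
    case_reference="ENC-2024-0187",
)
```

The point of the structure is the later steps: a consistent log is what makes it possible to answer "how many cases were affected by this output pattern" when an anomaly turns out to be systematic.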

The three-layer regulatory stack for healthcare AI

Healthcare providers in Europe face three layers of AI governance:

Layer 1: GDPR applies to any AI system that processes personal health data. An incident involving personal data (including a data breach caused by an AI system, or an AI output that reveals health data about a person without authorization) triggers GDPR reporting obligations: notification to the supervisory authority within 72 hours if there is a risk to individuals, and notification to affected individuals where the risk is high.

Layer 2: EU AI Act applies to high-risk AI systems (Annex III, which covers several healthcare uses). A healthcare organization using such a system is a "deployer" in AI Act terms: it must monitor the system in operation, report serious incidents (informing the system's provider and the national market surveillance authority), and maintain logs sufficient to reconstruct the circumstances of an incident.

Layer 3: MDR (Medical Device Regulation 2017/745) applies to AI systems classified as medical devices (software as a medical device, SaMD). MDR requires reporting serious incidents and field safety corrective actions to the competent authorities within specific timelines (Article 87): no later than 15 days for most serious incidents, 10 days for death or an unanticipated serious deterioration in health, and 2 days for a serious public health threat.

Not all healthcare AI triggers all three layers. A scheduling tool that does not process clinical data may only trigger GDPR. A diagnostic support tool classified as a medical device triggers all three.
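
As a rough illustration, the layer check reduces to three yes/no questions about a system. The sketch below encodes that triage; the boolean inputs are assumptions you would confirm with your DPO and the vendor's documentation, and this simplification is not a substitute for a proper legal classification:

```python
def applicable_frameworks(processes_personal_data: bool,
                          high_risk_under_ai_act: bool,
                          classified_as_medical_device: bool) -> list[str]:
    """Map a system's (already confirmed) classification to the
    regulatory layers that apply. Deliberately simplified."""
    layers = []
    if processes_personal_data:
        layers.append("GDPR")
    if high_risk_under_ai_act:
        layers.append("EU AI Act (high-risk obligations)")
    if classified_as_medical_device:
        layers.append("MDR")
    return layers

# Diagnostic support tool classified as a medical device: all three layers.
print(applicable_frameworks(True, True, True))
# Scheduling tool holding staff data but no clinical data: GDPR only.
print(applicable_frameworks(True, False, False))
```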

The incident response process

Step 1: Detection and containment (0-4 hours)

When an anomaly or incident is identified:

  1. Document the output: save the exact AI output that was flagged, including the timestamp, user ID (if applicable), and the patient encounter or case reference (without unnecessary PII in the incident log).
  2. Assess the blast radius: how many patient cases or clinical decisions were affected by this output pattern? Is this a single-case anomaly or a systematic issue?
  3. Pause or shadow-mode the system if a systematic issue is suspected. Do not wait for root cause analysis to contain a system producing potentially harmful outputs.
  4. Notify the clinical governance lead and the data protection officer (DPO) within 4 hours of detection.

The DPO makes the initial call on whether this triggers a GDPR notification obligation.
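
The 4-hour internal escalation window and the 72-hour GDPR clock are the two deadlines most easily missed under pressure, so it helps to compute them the moment an incident is logged. A minimal sketch; note that the 4-hour window is this playbook's convention, while the 72 hours comes from GDPR Article 33 and formally runs from awareness of the breach:

```python
from datetime import datetime, timedelta, timezone

def containment_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Deadlines counted from detection time. The 4h escalation is a
    playbook convention; 72h is the GDPR Art. 33 window (which runs
    from awareness of a personal data breach)."""
    return {
        "notify_governance_lead_and_dpo_by": detected_at + timedelta(hours=4),
        "gdpr_supervisory_authority_by": detected_at + timedelta(hours=72),
    }

for name, due in containment_deadlines(datetime.now(timezone.utc)).items():
    print(f"{name}: {due.isoformat()}")
```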

Step 2: Assessment and reporting (4-72 hours)

Clinical assessment: the clinical governance lead (or clinical director) assesses whether any patient care decisions were affected and whether any harm occurred or could have occurred. This assessment goes into the incident record.

Technical assessment: the AI system vendor (or internal technical team) identifies the root cause. For commercial AI tools, the vendor, as the provider under the EU AI Act, has its own obligation to notify the national market surveillance authority. The healthcare provider has independent obligations regardless of what the vendor does.

GDPR decision: if personal health data was processed incorrectly, shared without authorization, or if the incident constitutes a personal data breach, the DPO notifies the supervisory authority within 72 hours. In the Netherlands this is the Autoriteit Persoonsgegevens; in Germany, the state-level data protection authority; in France, the CNIL.

EU AI Act decision: for high-risk AI systems, the deployer informs the system's provider and the national market surveillance authority when a serious incident has occurred, or when a malfunction is discovered that could lead to one.
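
Taken together, the three decisions in this step form a small matrix. The sketch below encodes it; the boolean inputs stand in for the DPO's and clinical lead's actual assessments, which the code obviously cannot make for you:

```python
def required_notifications(personal_data_breach: bool,
                           breach_risky_to_individuals: bool,
                           high_risk_ai_system: bool,
                           serious_incident_or_credible_risk: bool,
                           medical_device: bool) -> list[str]:
    """The three reporting obligations are parallel and independent:
    satisfying one does not satisfy another."""
    notifications = []
    if personal_data_breach and breach_risky_to_individuals:
        notifications.append("GDPR: supervisory authority, within 72 hours")
    if high_risk_ai_system and serious_incident_or_credible_risk:
        notifications.append("EU AI Act: inform the system provider and the "
                             "national market surveillance authority")
    if medical_device and serious_incident_or_credible_risk:
        notifications.append("MDR: competent authority, Art. 87 timelines")
    return notifications

# A diagnostic medical device that leaked data and produced a clinical
# incident triggers all three notifications:
for n in required_notifications(True, True, True, True, True):
    print(n)
```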

Step 3: Root cause and corrective action (72 hours - 30 days)

The root cause analysis answers three questions:

  1. What was the failure mode? (Model drift, incorrect training data, incorrect integration, user error, infrastructure failure)
  2. What governance controls failed to catch it before it reached a patient-affecting outcome?
  3. What change is required to prevent recurrence?

Corrective actions typically fall into one of three categories:

  • Technical: retraining, recalibration, integration fix, or vendor patch
  • Operational: a process change (for example, requiring a second clinical review for AI outputs above a certain risk threshold), staff training, or a workflow adjustment
  • Procurement: if the vendor's system has a fundamental safety problem, procurement review and possible replacement

Step 4: Documentation and regulatory closure

Every incident requires a closed incident record containing:

  • Date, time, and source of detection
  • Clinical assessment (was patient harm caused, possible, or ruled out)
  • Regulatory notifications made and dates
  • Root cause summary
  • Corrective actions taken
  • Sign-off from the clinical governance lead and DPO

This record is retained for the period required by MDR (minimum 10 years for most medical devices) and made available to regulatory authorities on request.
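
The record maps naturally onto a fixed structure, which also makes the multi-year retention easier to audit. A sketch mirroring the fields above; the enum values and field names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class ClinicalImpact(Enum):
    HARM_CAUSED = "harm caused"
    HARM_POSSIBLE = "harm possible"
    HARM_RULED_OUT = "harm ruled out"

@dataclass
class IncidentRecord:
    """Closed incident record; retain per MDR requirements."""
    detected_at: datetime               # date and time of detection
    detection_source: str               # who or what flagged the incident
    clinical_impact: ClinicalImpact     # outcome of the clinical assessment
    notifications_made: list[str]       # regulatory notifications, with dates
    root_cause_summary: str
    corrective_actions: list[str]
    signed_off_by: list[str]            # clinical governance lead and DPO
```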

Roles and responsibilities

  • Clinical governance lead / clinical director: owns the clinical impact assessment and the corrective action decision
  • Data protection officer (DPO): owns the GDPR assessment and supervisory authority notification
  • IT / technical lead: owns the technical root cause investigation
  • Operations manager: coordinates the response timeline and documentation
  • Vendor contact: provides technical logs and root cause support; handles their own regulatory notifications

For a 15-person clinic or a small healthcare technology company with 30 employees, these roles may overlap. The DPO may be external (shared DPO service). The clinical governance lead may also be the clinical director. The important thing is that each responsibility has a named person assigned before an incident happens.

Minimum viable incident response kit

A small healthcare provider can be ready for AI incidents with four documents:

  1. AI system inventory: which AI systems are in use, their risk classification (high-risk/limited-risk/minimal-risk under EU AI Act), and whether they are classified as medical devices.
  2. Incident log template: a structured form capturing the fields listed in Step 4 above.
  3. Regulatory contact list: the national supervisory authority (GDPR), the national market surveillance authority (EU AI Act), and any notified body contacts for MDR-classified devices.
  4. Escalation contact list: names and contact details for the clinical governance lead, DPO, technical lead, and relevant vendor contacts.

This is the minimum. Healthcare organizations with higher-risk AI use cases (diagnostic AI, patient monitoring, surgical assistance) should have a full incident response procedure documented and tested.
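
Even a four-document kit benefits from a machine-readable form that can be version-controlled and reviewed alongside the rest of your governance documentation. A minimal sketch of documents 1 and 4 as plain Python data; every name, classification, and contact below is a placeholder:

```python
# Document 1: AI system inventory. Classifications must be confirmed
# with your DPO and the vendor, never guessed.
AI_SYSTEM_INVENTORY = [
    {"system": "triage-assist-v2", "ai_act_risk": "high-risk",
     "medical_device": True, "vendor": "Example Vendor BV"},
    {"system": "shift-scheduler", "ai_act_risk": "minimal-risk",
     "medical_device": False, "vendor": "Example Vendor BV"},
]

# Document 4: escalation contact list, one named person per responsibility.
ESCALATION_CONTACTS = {
    "clinical_governance_lead": "Dr. A. Example, +00 0 0000 0000",
    "dpo": "External DPO service, dpo@example.org",
    "technical_lead": "B. Example, b.example@example.org",
    "vendor_contact": "Vendor incident desk, incidents@vendor.example",
}
```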

FAQ

When must a healthcare provider notify under the EU AI Act vs GDPR?

GDPR notification is triggered when a personal data breach occurs (within 72 hours to the supervisory authority). EU AI Act notification is triggered when a serious incident with a high-risk AI system occurs. These can overlap: an AI system producing a data breach AND a clinical incident requires both notifications. They are independent obligations with different recipients.

Does this apply to AI tools we use for administrative purposes?

Administrative AI (scheduling, billing, HR screening) is generally not classified as high-risk under the EU AI Act unless it processes health data in a way that affects patient care. GDPR still applies to any administrative tool processing employee or patient personal data. Check the EU AI Act Annex III and consult your DPO for a definitive classification.

Our AI tool vendor says they handle regulatory reporting. Are we still responsible?

The EU AI Act creates parallel obligations. The vendor (as the provider of the AI system) has their own notification obligations. The healthcare organization (as the deployer) has independent obligations. You cannot delegate your reporting obligation to the vendor. Both parties must report independently when their obligations are triggered.

How does this interact with clinical governance processes we already have?

AI incidents should be integrated into existing clinical governance processes, not run in parallel. If your organization has a clinical incident reporting system (such as Datix or an equivalent), AI incidents should be logged there, with an additional AI-specific module for the technical and regulatory fields. This avoids creating a separate silo of AI incidents invisible to the clinical governance function.
