The Monthly AI Governance Review: A 90-Minute Template for European SME Leaders
TL;DR: A 90-minute monthly AI governance review template for European SME operations leaders. Covers tool owner checks, incident logging, and leadership escalation.
Governance that only happens annually is not governance — it is documentation. By the time you conduct your yearly review, the tools your team uses have changed, new shadow AI has emerged, and whatever you documented twelve months ago is too stale to reflect how AI is actually operating in your business.
Monthly reviews are the difference between a governance framework that functions and one that exists on paper. This template gives operations leaders, CTOs, and designated tool owners a structured 90-minute review they can run on the first working Monday of each month. It requires no external support, produces a consistent evidence trail, and connects directly to the quarterly leadership review where escalations are resolved.
The format assumes your organisation has an AI register — a living inventory of every AI tool in use, with ownership assigned per tool. If you do not have one yet, build it first using the AI governance framework for European SMEs before attempting this review.
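If you are still assembling the register, the structure it needs is minimal. The sketch below shows one illustrative way to model a register entry; every field name and status value here is an assumption rather than a prescribed schema, and a spreadsheet with the same columns works equally well.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative register entry. Field names and status values are
# assumptions, not a prescribed schema.
@dataclass
class RegisterEntry:
    tool_name: str               # e.g. "Notion AI"
    vendor: str
    approved_purpose: str        # the scope the tool was approved for
    owner: str                   # current employee responsible for reviews
    risk_classification: str     # e.g. "minimal", "limited", "high" (EU AI Act)
    dpa_in_place: bool           # Data Processing Agreement confirmed
    status: str = "active"       # "active" or "decommissioned"
    decommissioned_on: Optional[date] = None
    last_reviewed: Optional[date] = None
```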
Before You Start: What This Review Is Not
This review is not a performance evaluation of your team. It is not a vendor audit. It is not an opportunity to add new tools or redesign workflows. The monthly review has one purpose: verify that AI tools already in your register are operating within their defined parameters and that any anomalies are logged and escalated appropriately.
Scope creep in monthly governance reviews is the most common reason they get cancelled. If reviewers feel the monthly meeting keeps expanding into strategy or procurement, they will reschedule it indefinitely. Keep the scope narrow and the format fixed.
Section 1: Register Currency Check (15 minutes)
Open your AI register and confirm it matches the current tool landscape.
Check 1.1 — New tools. Ask: has any team member started using an AI tool in the past 30 days that is not yet in the register? Common sources of new additions: software updates that added AI features, free-tier trials started by individual team members, and AI tools embedded in services you already use. If new tools are discovered, add them to the register now and assign an owner before moving to Section 2.
Check 1.2 — Retired tools. Has any tool been switched off, migrated from, or stopped being used since the last review? Update the register status to "decommissioned" with a date. This matters for regulatory documentation — keeping active entries for tools no longer in use creates confusion during an audit.
Check 1.3 — Ownership gaps. Is every active tool in the register assigned to a current employee? Staff turnover and role changes can leave tools without an active owner. If any tool has no assigned owner or the assigned owner has left the organisation, reassign now (a minimal check sketch follows at the end of this section).
Target output: a register that accurately reflects your current AI tool landscape, with every active tool owned and every inactive tool correctly marked.
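Check 1.3 lends itself to a mechanical pass if your register lives in a structured store. A minimal sketch, assuming the register can be exported as a list of rows; the field names and the staff list are illustrative:

```python
def find_ownership_gaps(register: list[dict], current_staff: set[str]) -> list[dict]:
    """Return active tools whose owner is missing or has left (Check 1.3)."""
    return [
        entry for entry in register
        if entry["status"] == "active"
        and entry.get("owner") not in current_staff   # also catches missing owners
    ]

# Example: "J. Brandt" has left, so Slack AI is flagged for reassignment.
register = [
    {"tool": "Notion AI", "owner": "A. Keller", "status": "active"},
    {"tool": "Slack AI", "owner": "J. Brandt", "status": "active"},
]
print(find_ownership_gaps(register, {"A. Keller", "M. Duarte"}))
```

Anything this returns must be reassigned before the Section 2 reviews begin.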
Section 2: Per-Tool Owner Reviews (45 minutes)
Each tool owner completes a brief review of their assigned tools. If a single person owns multiple tools, allow five to eight minutes per tool. For organisations with many tools, this section can be distributed across owners ahead of the review meeting and summarised by the facilitator.
For each active tool, the owner answers five questions:
Q1: Is the tool being used for its approved purpose? Review actual usage over the past 30 days. Look for signs that the tool is being used for tasks outside its approved scope — for example, a document summarisation tool being used to process employee performance data, or a customer support tool being used to draft internal communications that go directly to clients without human review. If scope drift is detected, log it as an anomaly.
Q2: Has the output quality been acceptable? Judge this against the override rate: the proportion of AI outputs that team members modified, rejected, or escalated. Has it stayed within the acceptable range defined when the tool was approved? A sudden increase in overrides or rejections typically signals a model update, a data quality change, or a use case expanding beyond the tool's trained domain. Log and investigate if the override rate has changed materially (a minimal calculation sketch follows after Q5).
Q3: Have any incidents occurred? An incident is any event where AI output caused or could have caused: a data breach, an inaccurate communication to a customer or supplier, a compliance violation, or a material business error. Log all incidents — including near-misses — with date, tool, description, and resolution. Incidents require escalation to the leadership team regardless of how they were resolved.
Q4: Is the vendor relationship current? Confirm the Data Processing Agreement is still in place and that there have been no notices from the vendor about changes to data processing terms, server location, pricing structure, or ownership. Vendor change notices sent by email are frequently missed — assign one team member to monitor vendor communications for each tool. Log any outstanding vendor notices as escalations.
Q5: Is the EU AI Act classification still accurate? Use case evolution can change an AI system's risk classification. A tool that was minimal-risk when first deployed may have been reconfigured or repurposed in ways that move it into a limited or high-risk category. If the use case has changed materially since the last classification, flag for re-classification before the next quarterly review.
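The override rate from Q2 is simple arithmetic once outcomes are logged per output. A minimal sketch, assuming each output over the past 30 days is recorded as accepted, modified, rejected, or escalated; the baseline and the 1.5x threshold are examples, not recommendations:

```python
def override_rate(outcomes: list[str]) -> float:
    """Proportion of AI outputs that were modified, rejected, or escalated."""
    overridden = sum(1 for o in outcomes if o in {"modified", "rejected", "escalated"})
    return overridden / len(outcomes) if outcomes else 0.0

# Example: 100 logged outcomes for one tool over the past 30 days.
outcomes = ["accepted"] * 82 + ["modified"] * 12 + ["rejected"] * 4 + ["escalated"] * 2
rate = override_rate(outcomes)          # 0.18
baseline = 0.10                         # rate recorded when the tool was approved
if rate > baseline * 1.5:               # "materially changed" is a judgment call
    print(f"Override rate {rate:.0%} vs baseline {baseline:.0%}: log as anomaly")
```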
Target output: a completed review record for each active tool, with anomalies, incidents, and escalations clearly logged.
Section 3: Escalation Triage (20 minutes)
Review all items flagged as anomalies, incidents, or escalations from Section 2. For each item, assign one of three statuses:
Resolved: The issue was identified and corrected during the review. Document the resolution and the corrective action taken. No further escalation needed.
Owner action required: The tool owner has a specific, time-bound corrective action to complete before the next review. Document the action and deadline. The owner confirms at the next monthly review whether it was completed.
Leadership escalation: The issue cannot be resolved at the tool owner level and requires a decision from the leadership team. These items are added to the quarterly review agenda. If the issue has immediate regulatory, financial, or reputational consequences, escalate immediately rather than waiting for the quarterly cycle.
Common escalation triggers: a potential GDPR breach involving AI-processed personal data; discovery that a tool has been processing data in a jurisdiction that violates your DPA; a vendor acquisition notice with unclear implications for data terms; or a high-risk AI system operating without the required human oversight mechanism in place.
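If flagged items live in the same structured store as the register, the triage outcome can be routed mechanically once a human has assigned one of the three statuses. A sketch under that assumption; the status names mirror the list above, and everything else is illustrative:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FlaggedItem:
    tool: str
    description: str
    status: str                       # "resolved" | "owner_action" | "leadership"
    deadline: Optional[date] = None   # required when status is "owner_action"
    immediate: bool = False           # regulatory, financial, or reputational urgency

def route(item: FlaggedItem, quarterly_agenda: list, urgent_alerts: list) -> None:
    """Send a triaged item to the right follow-up channel."""
    if item.status == "resolved":
        return                        # resolution already documented in the review
    if item.status == "owner_action":
        if item.deadline is None:
            raise ValueError("Owner actions must be time-bound")
        return                        # owner confirms completion at next monthly review
    if item.status == "leadership":
        target = urgent_alerts if item.immediate else quarterly_agenda
        target.append(item)           # immediate issues bypass the quarterly cycle
```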
Section 4: Review Record and Sign-off (10 minutes)
Complete the monthly review record. This is a single document (or database row, if your register is in Airtable or Notion) capturing the fields below; a minimal sketch of the record follows the list:
- Review date
- Attendees / tool owners who completed reviews
- Register changes made (new tools, retirements, ownership changes)
- Anomalies logged (per tool)
- Incidents logged (per tool)
- Escalations raised (summary, owner, deadline or quarterly agenda)
- Completion status: complete or incomplete (with reason if incomplete)
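As one illustrative shape for that record, the fields above could be captured as a structured row like the sketch below; the names are assumptions, and an Airtable or Notion table with the same columns is equivalent.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MonthlyReviewRecord:
    review_date: date
    attendees: list[str]             # tool owners who completed reviews
    register_changes: list[str]      # new tools, retirements, ownership changes
    anomalies: dict[str, list[str]]  # tool name -> anomalies logged
    incidents: dict[str, list[str]]  # tool name -> incidents logged
    escalations: list[str]           # summary, owner, deadline or quarterly agenda
    complete: bool = True
    incomplete_reason: str = ""      # required when complete is False
```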
The review record is your governance evidence trail. Under the EU AI Act, deployers of high-risk systems are required to maintain records of oversight activity. Even for non-high-risk systems, the review record is what you produce if a customer, auditor, or regulator asks how you monitor your AI tools.
Sign off on the record before closing the session. "Sign off" means the facilitator confirms all sections are complete and the record is saved in a location accessible to the leadership team.
What to Do When the Review Surfaces a Problem
The monthly review is designed to surface problems early, when they are still manageable. The most common problems and recommended responses:
Shadow tool discovered: Add to the register, classify, and assign an owner. If the tool has been processing personal data without a DPA, treat it as a potential GDPR incident and escalate immediately. If not, proceed through the normal approval process.
Vendor sends unexpected terms change: Do not accept automatically. Review the change against your current DPA and data residency requirements. If the change affects data processing terms or location, escalate to the leadership team before the next review cycle.
AI output caused a customer-facing error: Treat as an incident. Document the error, the correction made, and whether the customer was affected. If the error involved personal data, assess whether a breach notification is required. Escalate to leadership regardless.
Tool is being used for a purpose not in the register: Assess whether the new use case changes the risk classification. If it does, suspend the expanded use until the new classification is confirmed and any additional compliance requirements are met.
For more detail on escalating shadow AI specifically, see the shadow AI escalation framework for European SMEs.
Frequently Asked Questions
How many people need to be involved in the monthly review?
At minimum, one facilitator and each tool owner. In small SMEs (under 15 people), this is often the same two or three people. The review does not need to be a synchronous meeting — tool owners can complete their Section 2 reviews independently and submit to the facilitator, who runs Sections 1, 3, and 4. The synchronous format is preferable if there are active escalations that need discussion.
What if a tool owner skips their review?
The facilitator marks that tool's review as incomplete in the monthly record and follows up before the quarterly review. A pattern of missed reviews (two or more consecutive months) should be escalated to the leadership team — it may indicate that the tool ownership assignment is not viable for that person's workload, or that the review is perceived as burdensome enough to skip. Both are governance problems worth addressing.
Can this review be combined with another recurring management meeting?
Yes, with conditions. Combining the AI governance review with a standing operations or leadership meeting works if: (a) it is a fixed agenda item with a defined time slot, (b) it is not displaced when other agenda items run over, and (c) the review record is still completed as a separate document. Governance that is discussed in a meeting but never documented provides no audit trail.
Does the monthly review need to cover AI features embedded in software we already use?
Yes. AI features embedded in productivity tools (Teams Copilot, Slack AI, Notion AI, CRM AI assistants) carry the same data processing implications as standalone AI tools. Many SMEs have inadvertently enabled AI features in existing software without completing DPA review or classifying the risk. These should be in the register and included in the monthly review on the same terms as any other tool.
Further Reading
- AI Governance Framework for European SMEs 2026 — the full framework this review cadence sits within
- AI Use Policy Template for European Employees — the policy document tool owners enforce during their monthly reviews
- AI Compliance Monitoring Checklist for European SMEs — the quarterly checklist that complements this monthly review
- Shadow AI Escalation Framework for European SMEs — decision path when shadow tools surface during Section 1
Want a facilitated first review? Book a free consultation and we will run your first monthly governance review with your team.

