
EU AI Act guidance is late. Your AI inventory can’t be.

Updated
3 min read
EU AI Act guidance is late. Your AI inventory can’t be.
D
PhD in Computational Linguistics. I build the operating systems for responsible AI. Founder of First AI Movers, helping companies move from "experimentation" to "governance and scale." Writing about the intersection of code, policy (EU AI Act), and automation.


TL;DR: Guidance for the EU AI Act is delayed, but your preparation for high-risk AI system registration can't wait. Learn our 14-day sprint plan.

A 14-day sprint for European SMEs to classify AI, build evidence, and get ready for the high-risk database.

A delay in official EU AI Act guidance doesn't grant a free pass; it raises your uncertainty cost. The clock is ticking on compliance, especially for high-risk AI system registration.

The guidance delay doesn’t pause your obligations. It raises your uncertainty cost.

A delay in official guidance does not mean your organization gets a free pass. It means you have to make defensible decisions with incomplete information. If you cannot explain what AI you use, where it sits in critical processes, and who owns it, you will be forced into last-minute audits, vendor scrambling, and reactive controls when enforcement timelines tighten or change.

If you can’t name your AI systems, you can’t govern them.

Most SMEs already have ‘hidden AI’ embedded in SaaS: copilots, automated scoring, support automation, recruitment screening, fraud flags, and analytics. Start with an AI inventory that captures: system name, business owner, vendor/provider, purpose, inputs, outputs, human-in-the-loop steps, data categories (including personal data), and impact surface (customers, employees, financial decisions). This turns compliance from panic into project management, a core part of any effective AI Audit.
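An inventory row like the one above is easy to sketch as a structured record. This is a minimal, illustrative sketch: the field names mirror the list in this article, not any mandated schema, and the example system is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryRecord:
    """One row in the AI inventory. Field names follow the article's
    checklist and are illustrative, not an official EU AI Act schema."""
    system_name: str
    business_owner: str
    vendor: str
    purpose: str
    inputs: list[str]
    outputs: list[str]
    human_in_the_loop: bool
    data_categories: list[str]   # e.g. ["personal data"]
    impact_surface: list[str]    # e.g. ["customers", "employees"]

# Hypothetical example of a "hidden AI" SaaS feature surfaced by discovery
record = AIInventoryRecord(
    system_name="CV screening assistant",
    business_owner="Head of HR",
    vendor="ExampleVendor (hypothetical)",
    purpose="Rank incoming job applications",
    inputs=["CVs", "application forms"],
    outputs=["candidate shortlist score"],
    human_in_the_loop=True,
    data_categories=["personal data"],
    impact_surface=["job applicants"],
)
print(record.system_name)
```

Even a spreadsheet with these columns does the job; the point is that every system gets the same fields, so gaps are visible at a glance.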

Minimum Viable Evidence Pack: what to document before you buy more AI.

For each AI system (including third-party tools), assemble a light evidence pack: (1) classification rationale (why it is or isn’t high-risk), (2) risk register with top failure modes, (3) controls and monitoring plan, (4) incident response path, (5) vendor artifacts you can actually obtain (model cards, security posture, DPA, audit logs, change notifications). This proactive documentation is a cornerstone of robust AI Governance & Risk Advisory. When guidance arrives, you update, not restart.
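The five-item pack lends itself to an automatic completeness check. A minimal sketch, assuming you store each system's pack as a simple dict (the item keys paraphrase the article's list and are not a formal standard):

```python
# The five evidence-pack items from the article, as checklist keys
EVIDENCE_ITEMS = [
    "classification_rationale",
    "risk_register",
    "controls_and_monitoring_plan",
    "incident_response_path",
    "vendor_artifacts",
]

def missing_evidence(pack: dict) -> list[str]:
    """Return which of the five items a system's pack still lacks."""
    return [item for item in EVIDENCE_ITEMS if not pack.get(item)]

# Hypothetical partially complete pack for one system
pack = {
    "classification_rationale": "Likely high-risk: recruitment, Annex III",
    "risk_register": "Top 5 failure modes documented",
}
print(missing_evidence(pack))
# ['controls_and_monitoring_plan', 'incident_response_path', 'vendor_artifacts']
```

Running this per system turns "are we ready?" into a concrete list of open items per owner.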

Framework: The 14-Day Sprint for High-Risk AI System Registration

Day 1–3, Discover: map every AI feature across your stack (SaaS, custom, spreadsheets, bots).
Day 4–6, Classify: tag each system as likely high-risk, likely not, or unknown using Article 6 + Annex III logic.
Day 7–9, Control: define access, approvals, and human oversight for high-impact workflows.
Day 10–12, Evidence: build the Minimum Viable Evidence Pack.
Day 13–14, Register-ready: define the fields you’ll need for the EU high-risk database and assign owners so registration is a checklist, not a fire drill.
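The Day 4–6 classification step can be sketched as a triage helper. To be clear, this is not a legal classifier: real Article 6 / Annex III analysis needs human and legal review, and the keyword hints below are placeholder assumptions. The value of the sketch is the default: anything unmatched stays "unknown" rather than quietly becoming "not high-risk".

```python
from enum import Enum

class RiskTag(Enum):
    LIKELY_HIGH_RISK = "likely high-risk"
    LIKELY_NOT = "likely not high-risk"
    UNKNOWN = "unknown"

# Placeholder hints loosely inspired by Annex III domains; illustrative only,
# not a substitute for reading the Act or getting legal review.
ANNEX_III_HINTS = {"recruitment", "credit scoring", "biometric", "education"}

def triage(purpose: str) -> RiskTag:
    """First-pass tag for a system based on its stated purpose."""
    text = purpose.lower()
    if any(hint in text for hint in ANNEX_III_HINTS):
        return RiskTag.LIKELY_HIGH_RISK
    # Default to "unknown" and queue for human review; never assume
    # "not high-risk" without an explicit rationale.
    return RiskTag.UNKNOWN

print(triage("Recruitment screening of applicants"))  # RiskTag.LIKELY_HIGH_RISK
```

Systems tagged "unknown" become the review queue for Days 7–12, where the classification rationale gets written down.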

Written by Dr Hernani Costa, Founder and CEO of First AI Movers. Providing AI Strategy & Execution for Tech Leaders since 2016.

Subscribe to First AI Movers for practical, measurable AI strategies for business leaders. First AI Movers is part of Core Ventures.

Ready to increase your business revenue? Book a call today!
