EU AI Act Enforcement Is Active: What Q1 2026 Brought and What to Check Now
TL;DR: EU AI Act enforcement began January 2026. Here is what happened in Q1 and a 10-point checklist for European SMEs to verify now.
The EU AI Act moved from policy document to enforcement reality on 2 February 2026. For most European SME founders and compliance managers, the question is no longer "what does this law say" but "what do I need to verify this week."

Why this matters: national market surveillance authorities have begun their first formal compliance reviews, and the 6-month grace period for existing AI systems in high-risk categories closes in August 2026. A growing software team or professional services firm that has been treating this as a future concern now has a concrete deadline with concrete consequences.

This article covers what actually happened in Q1 2026, which obligations are active now, and a 10-point checklist you can walk through before the grace period closes. This is not a general EU AI Act overview; there are already numerous articles covering the basics. This covers the enforcement phase specifically.
One reference number to keep in mind: fines for violations of high-risk AI system obligations can reach EUR 15 million or 3% of global annual turnover, whichever is higher.
What Happened in Q1 2026: The Enforcement Landscape
The first quarter of 2026 established several important precedents for how enforcement is unfolding in practice.
National authorities activated their oversight structures. By March 2026, France (CNIL leading AI Act coordination), Germany (national AI authority under the BNetzA umbrella), and the Netherlands (Autoriteit Persoonsgegevens with extended AI mandate) had all issued their first compliance guidance documents for businesses operating in their jurisdictions. These documents clarified which sectors were receiving initial scrutiny.
Priority sectors for early compliance review. Q1 enforcement attention concentrated in three areas: HR and recruitment software (specifically automated CV screening and candidate scoring tools), credit and insurance underwriting tools used by financial services providers, and AI systems used in education assessment. SMEs using off-the-shelf tools in these categories were included in scope, not just the software vendors. This is the key point many small business owners missed: if you use a third-party AI tool for recruitment or credit decisions, you carry compliance obligations alongside the vendor.
The prohibited practices ban took effect in February. Article 5 prohibitions came into force on 2 February 2026. These cover social scoring systems, real-time biometric surveillance in public spaces (with narrow law enforcement exceptions), and AI systems that exploit psychological vulnerabilities to influence behaviour. No enforcement actions against SMEs were publicly confirmed in Q1 on these grounds, but several large platform operators received formal inquiries.
High-risk obligations timeline confirmed. The compliance obligations for high-risk AI systems under Article 6 and Annex III are on a phased schedule. For systems placed on the market after the Act's entry into force, obligations are immediate. For existing systems already in use, the grace period runs until August 2026 for most categories. Embedded AI systems (AI built into machinery covered by other product regulations) have until 2027.
What SMEs told regulators they were confused about. Several national business associations published Q1 surveys of their members. The most common confusion points were: whether using a third-party AI product (as opposed to building one) creates obligations; whether internal-only AI tools are in scope; and how to classify a system that performs multiple functions, some of which might be high-risk and some not.
The short answers: yes, deployers carry obligations, not just providers; internal tools are in scope if they affect people's rights or access to services; and a mixed-function system is classified by its highest-risk component.
What Article 6 High-Risk Classification Means in Practice
Article 6 is the classification mechanism. It routes AI systems into the high-risk tier based on two pathways.
Pathway 1 covers AI systems that are themselves safety components of products regulated by existing EU law (machinery, medical devices, vehicles). If your AI is embedded in a regulated product, it is high-risk by definition.
Pathway 2 covers the Annex III list: employment, education, access to essential services, law enforcement, migration, and certain justice and democratic process applications. For European SME operators, the most practically relevant Annex III categories are:
- Recruitment and employment management tools that make or substantially influence hiring, promotion, or performance assessment decisions
- Access to credit and insurance (scoring and pricing tools)
- Access to education and vocational training (assessment of students and candidates)
If your organisation uses AI tools that fit these descriptions, even as a deployer of a third-party product, you are subject to the high-risk compliance obligations listed below.
The "substantially influences" language is important. A tool that produces a ranked list of candidates which a human manager then uses to make a hiring decision has been interpreted by legal experts as substantially influencing that decision. Do not assume that having a human in the final approval step removes your obligations.
For the governance framework that should sit behind these compliance obligations, see the AI Governance Framework for European SMEs.
The 10-Point Compliance Checklist for European SMEs
This checklist covers the obligations that are either already active or closing before August 2026. Walk through it before the grace period ends.
1. Inventory your AI tools. List every AI system your organisation uses, including tools embedded in software you already pay for (CRM AI features, HR platform AI, finance tool automation). You cannot classify what you have not listed. (A minimal structured inventory sketch follows this checklist.)
2. Classify each tool. For each tool on your list, determine whether it falls into an Annex III category. When in doubt, assume high-risk and verify. The cost of a classification assessment is lower than the cost of an enforcement inquiry.
3. Check your vendor agreements. For third-party AI tools in high-risk categories, your vendor should provide technical documentation and conformity information. If they cannot, that is a risk signal. Update your procurement process to require this for future contracts.
4. Verify human oversight mechanisms. For every high-risk system, document the human oversight process. Who reviews outputs before they affect a person? How does a person contest an AI-influenced decision? These processes must exist and be documented, not just implied.
5. Check your transparency notices. If any AI system your organisation uses interacts with people or affects their access to services, those people need to know. Review your customer communications, employee policies, and applicant-facing processes for AI disclosure statements.
6. Update your GDPR records. AI systems that process personal data for automated decisions require entries in your GDPR records of processing activities. If your AI inventory from step 1 reveals tools not listed there, update your records now.
7. Assess your high-risk logging setup. High-risk AI systems must maintain logs sufficient to trace their operation. Check whether your vendor provides this, or whether you need to implement it at the deployer level.
8. Document your bias and accuracy monitoring. High-risk systems require ongoing performance monitoring. If you are using an employment or credit AI tool, what is your process for detecting and responding to evidence of bias or systematic error? (A minimal logging-and-monitoring sketch covering this item and item 7 follows below.)
9. Assign accountability. Every high-risk AI system needs a named person responsible for its compliance. This does not require a full-time AI compliance officer in a founder-led company, but it does require a designated owner with authority to pause the system if something goes wrong.
10. Set your August 2026 review date. If you have existing high-risk systems covered by the grace period, put the compliance review date in your calendar now. When the grace period closes, the obligations become enforceable immediately.
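For checklist items 1 and 2, a shared spreadsheet is enough for most SMEs. If your team prefers to keep the inventory in version control alongside other documentation, a minimal sketch like the one below works just as well. The field names, simplified category labels, and the example entry are this article's own illustrative assumptions, not an official schema from the Act or any regulator.

```python
from dataclasses import dataclass
from datetime import date

# Annex III categories most relevant to SMEs, using simplified labels
# (assumption: shorthand tags for this sketch, not the Act's official wording).
ANNEX_III_CATEGORIES = {
    "employment",        # recruitment, promotion, performance assessment tools
    "credit_insurance",  # credit scoring and insurance pricing tools
    "education",         # assessment of students and training candidates
}

@dataclass
class AIToolRecord:
    name: str                        # e.g. "CV ranking feature in HR platform"
    vendor: str                      # supplier of the tool, or "internal" if built in-house
    role: str                        # "deployer" if you use a third-party tool, "provider" if you built it
    annex_iii_category: str | None   # one of ANNEX_III_CATEGORIES, or None if not applicable
    oversight_owner: str             # named person accountable for the system (checklist item 9)
    review_date: date                # next compliance review (checklist item 10)
    notes: str = ""

    @property
    def high_risk(self) -> bool:
        """Default to high-risk for anything in an Annex III category until verified otherwise."""
        return self.annex_iii_category in ANNEX_III_CATEGORIES

# Hypothetical example entry; the vendor name is made up.
inventory = [
    AIToolRecord(
        name="CV ranking feature in HR platform",
        vendor="ExampleHR",
        role="deployer",
        annex_iii_category="employment",
        oversight_owner="Head of People",
        review_date=date(2026, 8, 1),
    ),
]

for record in inventory:
    label = "HIGH-RISK" if record.high_risk else "minimal/limited risk"
    print(f"{record.name}: {label} (owner: {record.oversight_owner}, review: {record.review_date})")
```

The point of the `high_risk` default is the posture recommended in item 2: treat anything in an Annex III category as high-risk until a documented assessment says otherwise.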
Once the initial setup is done, the AI Compliance Monitoring Checklist for European SMEs and the Monthly AI Governance Review Template for SMEs give you repeatable processes to maintain your compliance posture.
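To make checklist items 7 and 8 concrete, here is a minimal sketch of what deployer-level logging and a recurring disparity check can look like. It assumes you can record each AI-influenced decision with a timestamp and a group label you are required not to disadvantage; the 0.8 threshold is the commonly cited four-fifths heuristic, and the file name and function names are this sketch's own assumptions, not anything prescribed by the Act or provided by a vendor.

```python
import csv
from collections import defaultdict
from datetime import datetime, timezone

LOG_FILE = "ai_decision_log.csv"  # hypothetical deployer-level log, one row per AI-influenced decision

def log_decision(tool: str, subject_id: str, group: str, outcome: bool, reviewer: str) -> None:
    """Append one AI-influenced decision to the log (checklist item 7: traceability)."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), tool, subject_id, group, outcome, reviewer])

def selection_rates(rows) -> dict:
    """Per-group share of positive outcomes, e.g. candidates passing an AI screen."""
    totals, positives = defaultdict(int), defaultdict(int)
    for _, _, _, group, outcome, _ in rows:
        totals[group] += 1
        if outcome in (True, "True"):
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates: dict, threshold: float = 0.8) -> dict:
    """Flag groups whose rate falls below `threshold` times the best group's rate (checklist item 8)."""
    best = max(rates.values())
    if best == 0:
        return {}  # no positive outcomes at all; review the tool itself rather than the ratios
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Example monthly review: log two decisions, read the log back, check for disparities.
log_decision("cv_screening", "cand-001", "group_a", True, "Head of People")
log_decision("cv_screening", "cand-002", "group_b", False, "Head of People")

with open(LOG_FILE, newline="") as f:
    rows = list(csv.reader(f))

rates = selection_rates(rows)
print(rates)                   # per-group pass rates
print(disparity_flags(rates))  # anything printed here should trigger a documented human review
```

Whether the log lives in a CSV, your HR platform's export, or a vendor-provided audit trail matters less than being able to show, in August 2026 and after, that the check ran on a schedule and that someone acted on the flags.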
Which SME Use Cases Attracted Early Compliance Attention
Based on Q1 regulatory guidance and published inquiry summaries, three SME use case patterns appeared most often in early compliance discussions.
Automated CV screening in recruitment. Several HR software products used by small and mid-sized companies include AI ranking and filtering features that default to "on." Many operators were unaware these features were active. If your HR platform includes AI screening, verify whether it is in use and classify it accordingly.
AI-powered credit limit decisions in B2B contexts. Some accounts receivable and trade credit platforms include AI-driven credit limit assignment. Operators using these tools as deployers carry obligations even though they are not the software vendor.
AI content moderation affecting access to platforms. For mid-sized companies running online platforms or communities, AI moderation tools that can restrict user access may fall under the access-to-services category in Annex III.
For sector-specific compliance context, see the AI Governance in Financial Services for European SMEs article, which covers the financial services obligations in more detail.
What the Fractional CTO or External Adviser Angle Looks Like
Many European SMEs do not have an in-house legal or compliance team with AI expertise. The Q1 enforcement picture suggests that regulators are not expecting small business owners to be AI law experts. What they do expect is that operators have made a reasonable effort to understand their obligations and have acted on that understanding.
A structured one-day compliance review with an external AI adviser, followed by documented decisions on each tool in your inventory, is likely sufficient to demonstrate that reasonable effort. The Fractional AI Governance Consultant vs In-House AI Lead piece covers the build-vs-buy decision for ongoing compliance capacity.
If you discover an incident or near-miss while doing this review, the AI Incident Response Playbook for European SMEs covers what to do next, including notification obligations.
Frequently Asked Questions
Does the EU AI Act apply if I only use off-the-shelf AI tools and do not build anything?
Yes. The Act distinguishes between providers (who build and place AI systems on the market) and deployers (who use AI systems in their business operations). Deployers of high-risk AI systems carry their own set of obligations, separate from those of the provider. Using a third-party tool does not transfer compliance responsibility to the vendor.
What happens if I miss the August 2026 grace period deadline?
The grace period is a practical accommodation for existing systems. After it closes, national market surveillance authorities can initiate enforcement proceedings against non-compliant high-risk AI systems without any additional notice period. Fines for high-risk system violations can reach EUR 15 million or 3% of global annual turnover, whichever is higher. The grace period exists to give operators time to comply, not to defer compliance indefinitely.
How do I know if a system "substantially influences" a decision?
This is an active area of regulatory interpretation, but the working test used in Q1 guidance documents is this: if a human decision-maker would materially change their decision without the AI output, the system substantially influences the decision. A ranked list, a score, a recommendation, or a flag all qualify. Disclosure of information without ranking or recommendation is less likely to qualify.
Further Reading
- AI Governance Framework for European SMEs: The structural governance layer that sits behind your EU AI Act compliance obligations.
- AI Compliance Monitoring Checklist for European SMEs: Repeatable monitoring process for ongoing compliance after your initial review.
- AI Governance in Financial Services for European SMEs: Sector-specific obligations for SMEs in financial services, credit, and insurance.
- Monthly AI Governance Review Template for SMEs: Keep your compliance posture current with a structured monthly cadence.
Ready to work through your EU AI Act compliance posture with a specialist? Visit First AI Movers AI Consulting to start the conversation.

