How to Write an AI Use Policy European Employees Will Actually Follow
TL;DR: A practical AI use policy template for European SMEs. Covers approved tools, data handling rules, employee obligations, and EU AI Act transparency requirements.
Most AI use policies fail not because they are wrong but because they are written for lawyers, not for the people who will live with them. They describe prohibitions without explaining the reasoning, use regulatory language that sounds abstract in a 25-person logistics firm, and provide no practical guidance on the questions employees actually encounter: Can I use ChatGPT to draft this customer email? What happens if I use a tool that is not on the approved list?
This guide gives you a policy structure that is compliant with EU requirements, written in plain language, and specific enough that a new hire on their second day can understand what is expected of them. The template sections below are ready to adapt — replace the bracketed placeholders with your organisation's specifics and review with your legal counsel if you deploy any high-risk AI systems.
What Your Policy Must Cover Under EU Law
The EU AI Act and GDPR together create a framework of obligations that an SME AI use policy needs to address, even if the policy itself does not reference the regulations by name. The practical requirements are:
Transparency to customers and third parties. Under EU AI Act Article 50, organisations deploying AI that generates content, interacts with users, or produces outputs that could be mistaken for human work must disclose the AI origin in certain contexts. Your policy needs to tell employees when and how to make that disclosure.
Personal data protection. Under GDPR, employees processing personal data through AI tools are acting on behalf of your organisation. If they input personal data into a tool without a valid Data Processing Agreement in place, the organisation, not the employee personally, bears the compliance exposure. Your policy must define which data categories may and may not be processed through AI tools, and that rule must hold regardless of which specific tool is used.
Human oversight of consequential decisions. The EU AI Act requires human oversight for high-risk AI systems. Even for lower-risk tools, good governance requires that consequential decisions — ones that affect customers, staff, suppliers, or the business materially — include a human review step before acting on AI output. Your policy should establish this as a default expectation, not just a requirement for regulated use cases.
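This default is easy to state and easy to skip, so some teams make it mechanical: outbound AI-assisted actions pass through a software gate that refuses to proceed without a named reviewer. A minimal sketch of that idea in Python; every name here is hypothetical, not a reference to any particular system.

```python
from dataclasses import dataclass


@dataclass
class Review:
    """Record of a human sign-off on a piece of AI-generated output."""
    reviewer: str   # a named individual, not a team alias
    approved: bool  # the reviewer read, verified, and takes responsibility


def dispatch(ai_output: str, review: Review | None, send) -> None:
    """Send AI output onward only after an explicit human approval."""
    if review is None or not review.approved:
        raise PermissionError("Consequential AI output requires human review before dispatch")
    send(ai_output)
```

The point is not the code but the shape: review is a recorded, attributable step, not a verbal assurance.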
The Policy Structure
Section 1: Purpose and Scope
State clearly what the policy covers, who it applies to, and why it exists. Avoid legalistic framing. A sentence like "This policy helps [Company Name] use AI tools effectively while protecting our customers, staff, and business from the risks that come with getting it wrong" is more useful than "This policy establishes the regulatory compliance framework for artificial intelligence use."
Scope: The policy applies to all employees, contractors, and agency staff. It covers all AI tools used in connection with [Company Name] work, on any device, regardless of whether the tool was purchased by the company or used via a personal account.
Purpose: To ensure AI tools are used in ways that protect customer data, meet our legal obligations under EU law, and deliver consistent quality in our work.
Section 2: Approved Tools and the Request Process
List your approved AI tools explicitly. If your AI register is the authoritative source, reference it directly ("See the current AI Register at [link]"). For each approved tool, state the approved use case — not every tool is approved for every purpose.
Example structure:
| Tool | Approved uses | Not approved for |
| --- | --- | --- |
| [Tool A] | Drafting internal documents, summarising meeting notes | Processing customer personal data, customer-facing communications without human review |
| [Tool B] | Code generation and review | Processing financial records or employee data |
| [Tool C] | Customer support response drafts | Final sending without human review and approval |
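If your register lives in a machine-readable form, the same table can drive a simple lookup so that "is this tool approved for this task?" has an unambiguous answer. A minimal sketch in Python, with hypothetical tool names and use-case labels mirroring the table above:

```python
# Hypothetical machine-readable mirror of the approved-tools table.
APPROVED_USES: dict[str, set[str]] = {
    "tool_a": {"internal_drafting", "meeting_summaries"},
    "tool_b": {"code_generation", "code_review"},
    "tool_c": {"support_draft"},  # drafts only; human review before sending
}


def is_approved(tool: str, use_case: str) -> bool:
    """True only if the tool is on the register for this specific use case."""
    return use_case in APPROVED_USES.get(tool, set())


# Example: Tool A may draft internal documents but not customer communications.
assert is_approved("tool_a", "internal_drafting")
assert not is_approved("tool_a", "customer_communication")
```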
To request a new tool: Submit a request to [designated owner / email] using the AI tool request form at [link]. Requests will be reviewed within [X business days]. Do not begin using a tool for business purposes until approval is confirmed.
Using unapproved tools: Using an unapproved AI tool for work purposes — including via a personal account — is a policy violation. If you discover you have already used an unapproved tool, disclose this to [designated owner] within [X days]. The [amnesty process / grace period] applies to disclosures made within [X days] of this policy taking effect.
Section 3: Data Handling Rules
This section is the most critical for GDPR compliance. State explicitly which data categories may and may not be processed through AI tools, and explain the reasoning briefly so employees understand why the rule exists.
Data you may process through approved AI tools:
- Non-personal business information (market research, internal process documentation, technical content)
- Anonymised or aggregated data with no personally identifiable information
- Publicly available information (news, published reports, open datasets)
Data you must not process through any AI tool without explicit approval from [designated owner]:
- Customer personal data (names, email addresses, contact details, purchase history, account information)
- Employee personal data (performance data, payroll, health information, disciplinary records)
- Financial records containing individual account or transaction data
- Legal documents containing privileged or confidential information
- Any data subject to a specific confidentiality obligation
Why this matters: When you enter personal data into an AI tool, the data may be stored, used for model training, or processed on servers outside the EU — even if the tool looks like a simple web interface. Without a Data Processing Agreement in place between [Company Name] and the tool vendor, that processing is a GDPR violation that the company bears responsibility for. The approved tool list only includes tools where this agreement is in place.
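Some organisations back these rules with a lightweight technical check that flags obvious personal data before text is pasted into a tool. The sketch below assumes a simple pattern scan is acceptable as a first line of defence; it catches email addresses and phone-like numbers only, so a clean result never overrides the rules above.

```python
import re

# Deliberately conservative patterns: obvious identifiers only.
# A regex scan cannot recognise names, addresses, or health data,
# so it supplements the policy rather than enforcing it.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone-like number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def obvious_pii(text: str) -> list[str]:
    """Return the categories of obvious personal data found in the text."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]


# Example: flag a prompt containing a customer email address.
findings = obvious_pii("Summarise the complaint from jane.doe@example.com")
if findings:
    print("Do not submit - possible personal data:", ", ".join(findings))
```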
Section 4: AI-Generated Content and Transparency
Tell employees what they are expected to do when AI generates content that will reach customers, suppliers, or the public.
Human review requirement: Any AI-generated output that will be sent to a customer, published publicly, or used as the basis for a consequential business decision must be reviewed and approved by a named human before use. "Approved" means the reviewer has read the output, verified its accuracy, and taken responsibility for it. Sending AI-generated content without review is not permitted.
Disclosure requirements: Disclose the AI origin of content in the following circumstances:
- Customer-facing chatbot or automated response systems — include a disclosure in the interface or initial message (see the sketch after this list)
- Marketing content where the AI origin is material to the recipient's assessment (check with [designated owner] if uncertain)
- Any context where a customer or business partner has explicitly asked whether content was human-generated
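For the chatbot case, the simplest way to satisfy the disclosure requirement is to build it into the opening message, so no conversation can start without it. A minimal sketch; the wording and function names are hypothetical and should match your own tone of voice.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "A member of our team reviews anything consequential before we act on it."
)


def open_support_session(send_message) -> None:
    """Start every automated support conversation with the AI disclosure."""
    send_message(AI_DISCLOSURE)
    send_message("How can we help you today?")


# Example: print stands in for the real chat transport.
open_support_session(print)
```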
Quality and accuracy: AI tools produce plausible-sounding output that is sometimes incorrect. Factual claims, figures, citations, and legal or regulatory statements in AI-generated content must be independently verified before use. Do not rely on AI output as a source of truth for matters where accuracy is consequential.
Section 5: Incidents and Escalation
Tell employees what counts as an incident, how to report it, and what happens when they do.
An AI-related incident is any event where:
- Personal data was entered into an unapproved tool or a tool without a current DPA
- AI output caused or could have caused an error in a customer communication, financial record, or consequential business decision
- A customer, supplier, or third party raised a concern about AI-generated content
- An AI tool behaved unexpectedly — producing outputs that were harmful, discriminatory, or factually dangerous
To report an incident: Contact [designated owner / email] within [X hours/days]. Provide: date, tool used, what happened, whether any personal data was involved, and whether any action has already been taken. You will not be penalised for reporting in good faith.
What happens after a report: [Designated owner] will assess whether a GDPR breach notification is required; personal data breaches must be reported to the supervisory authority within 72 hours of the organisation becoming aware of them, unless the breach is unlikely to result in a risk to individuals. All incidents are logged and reviewed in the monthly governance review.
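The report fields above translate directly into a log record, which turns the monthly review into filtering rather than archaeology. A minimal sketch, assuming the log is kept in a Python-readable form; all field names are illustrative.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIIncident:
    """One entry in the AI incident log, mirroring the report fields above."""
    occurred_on: date
    tool: str
    description: str
    personal_data_involved: bool
    action_taken: str = ""
    reported_by: str = ""
    # Set after assessment: does the 72-hour GDPR notification duty apply?
    breach_notification_required: bool | None = None


incident_log: list[AIIncident] = []
incident_log.append(AIIncident(
    occurred_on=date(2026, 3, 4),
    tool="unapproved_browser_tool",
    description="Customer name and order history pasted into an unapproved tool",
    personal_data_involved=True,
))
```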
Section 6: Employee Responsibilities Summary
Close with a one-page checklist employees can refer to:
- [ ] I only use AI tools on the approved list for business purposes
- [ ] I do not input customer or employee personal data into AI tools without explicit approval
- [ ] I review all AI-generated content before sending it externally or using it for a business decision
- [ ] I disclose AI origin when required
- [ ] I report incidents or suspected policy violations promptly
- [ ] I complete the required AI tool training [link/date] and renew annually
Rolling Out the Policy
A policy that is communicated once and never reinforced fades from memory within a few months. Effective policy rollout requires:
Launch briefing: A 30-minute session explaining not just what the policy requires but why — connect it to real scenarios your team already encounters. "What do you do if a customer asks whether this email was written by AI?" is more memorable than citing Article 50.
Acknowledgement: Have employees sign or digitally acknowledge receipt. This creates an evidence trail and signals that the policy has real weight.
Periodic refresh: Review the approved tool list at least quarterly as tools change. Communicate changes clearly — "we have added [Tool] to the approved list for [use case]" rather than silently updating the register.
New hire onboarding: Include the AI use policy in onboarding from day one. The training should cover the policy itself plus a practical session on which tools to use for common tasks.
For the governance structure that sits around this policy, see the AI governance framework for European SMEs.
Frequently Asked Questions
Do we need a lawyer to review this policy?
You should have legal counsel review it before finalising, particularly if you deploy any high-risk AI systems or if your business handles sensitive data categories (health data, financial data, legal data). The structure in this guide is designed to meet the practical requirements of the EU AI Act and GDPR, but a legal review ensures it is accurate for your specific business context and jurisdiction.
Can employees use personal AI tool subscriptions for work?
Your policy should address this explicitly. The recommended position: personal subscriptions are not approved tools. A ChatGPT Plus account held by an individual employee has no DPA with your organisation — processing company data through it is a policy violation regardless of who is paying for the subscription. If an employee wants to use a personal subscription, route the tool through the normal approval process.
What if an employee refuses to follow the policy?
Treat it as a standard conduct issue. The AI use policy is a workplace policy like any other — non-compliance can be addressed through your normal performance and conduct management process. If the non-compliance involves a GDPR breach (for example, the employee processed customer personal data through an unapproved tool), assess whether the breach notification obligation is triggered regardless of the internal conduct question.
How detailed does the approved tool list need to be?
Detailed enough that a new employee can determine within 60 seconds whether a tool they want to use is approved and for what purpose. A one-line entry per tool ("Approved: drafting internal documents only") is sufficient. Complexity increases confusion — if the list requires interpretation, employees will either default to asking for approval (which slows them down) or default to proceeding without approval (which creates risk). Clarity is the primary design goal.
Further Reading
- AI Governance Framework for European SMEs 2026 — the governance structure this policy sits within
- Monthly AI Governance Review Template for SMEs — the recurring process that keeps this policy live
- Shadow AI Escalation Framework for European SMEs — what to do when unapproved tool use is discovered
- AI Tool Selection Scorecard for European SMEs — structured process for evaluating tools before adding them to the approved list
Need help drafting your AI use policy? Book a free consultation to get an expert review of your draft and recommended additions for your industry.

