
How to Set Up a Claude Code Pilot in a Regulated European Company


TL;DR: A structured path to running a Claude Code pilot in a regulated European company, with GDPR, EU AI Act, and sector-specific compliance guardrails for finance, healthcare, and legal.

Running an AI coding tool pilot in a regulated European company is a different exercise than deploying it at a startup. A 30-person fintech in Amsterdam, a legal tech firm in Frankfurt, or a healthcare software company in Barcelona faces the same underlying tool decision, but with three additional layers: GDPR data processing obligations, EU AI Act classification requirements phasing in on the Act's staged timeline, and sector-specific rules (MiFID II, MDR, legal professional privilege) that generic AI guidance does not address. This guide gives operations leaders, CTOs, and engineering managers a structured path from first evaluation to controlled production use without triggering compliance gaps.

Why the Standard Pilot Approach Breaks in Regulated Contexts

Most Claude Code pilot guides assume a startup context: install the CLI, point it at a codebase, measure velocity. That approach exposes regulated companies to three risks that appear only after the pilot is already running.

Data residency uncertainty. Claude Code sends prompts to Anthropic's API. If a developer shares a file containing customer identifiers, financial transaction IDs, or health record fields, that data leaves the EEA unless Anthropic's DPA explicitly covers EEA-to-EEA processing or a valid GDPR transfer mechanism is documented. Many mid-sized companies launch pilots before confirming this.

AI Act classification gap. The EU AI Act's high-risk category includes AI systems used in hiring, access to essential services, and critical infrastructure. A growing software team at a bank or insurance company may be using Claude Code to build systems that fall into these categories. The obligation to document AI system purpose and establish human oversight is on the deployer, not on Anthropic.

Audit trail absence. Regulated companies in financial services, healthcare, and legal sectors operate under change-management requirements. Code that was generated by an AI system without a traceable review trail creates audit exposure. A pilot without governance design bakes in technical debt from day one.

Step 1: Determine Scope Before Touching Any Code

Before installing Claude Code on a single machine, answer these four questions in writing:

Which codebases will be in scope? List the repositories by name. For each, note whether any data class handled by that codebase is personal data, financial data, health data, or legally privileged information. Codebases that process personal data require a Data Protection Impact Assessment (DPIA) update to cover the new AI processor.

Who are the pilot participants? Name the engineers and their roles. Regulated companies typically need to document who has access to AI tools as part of change management. A pilot of 3-5 engineers in a controlled group is easier to manage than a department-wide rollout.

What is the approval chain for generated code? Agree before the pilot starts that no AI-generated code merges to main without a human review gate. Document this in the pilot charter. The review gate is both a compliance control and a quality control.

What counts as success at 30 days? Define measurable outcomes. An increase in test coverage, a reduction in review cycle time, or improved developer satisfaction scores are all reasonable. Vague outcome definitions make it impossible to decide whether to extend the pilot.
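The four scope answers above can be captured as a structured record committed alongside the pilot charter. This is an illustrative sketch, not a mandated format; the repository names, roles, and metrics are hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical sketch: the four written scope answers as one auditable record.
@dataclass
class PilotCharter:
    repositories: dict      # repo name -> data classes handled by that codebase
    participants: list      # named engineers and their roles
    review_gate: str        # agreed approval chain for AI-generated code
    success_metrics: list   # measurable 30-day outcomes

    def personal_data_repos(self) -> list:
        """Repos whose data classes trigger a DPIA update (Step 1)."""
        sensitive = {"personal", "financial", "health", "privileged"}
        return [name for name, classes in self.repositories.items()
                if sensitive & set(classes)]

charter = PilotCharter(
    repositories={"payments-api": ["financial", "personal"],
                  "docs-site": ["public"]},
    participants=["engineer A (backend)", "engineer B (platform)"],
    review_gate="No AI-generated code merges to main without human review",
    success_metrics=["test coverage delta", "review cycle time"],
)
print(charter.personal_data_repos())  # -> ['payments-api']
```

Keeping this record in version control gives auditors a dated answer to "who had access, to what, under which controls" without reconstructing it later.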

For organisations that have not yet done a broader AI readiness evaluation, the AI readiness assessment provides a structured starting point before committing to any specific tool.

Step 2: GDPR Compliance Checklist for Claude Code

For a 20-person company or larger mid-sized firm using Claude Code, work through each of these items before going live:

Data Processing Agreement. Anthropic's commercial terms include a DPA. Confirm it covers your intended use case. Key questions: Does it specify EEA data processing options? Does it cover processing of personal data in code comments, variable names, or inline documentation? Execute the DPA through Anthropic's official agreement process; a verbal understanding is not a GDPR safeguard.

Records of Processing Activities (RoPA) update. GDPR Article 30 requires organisations with more than 250 employees, or any organisation processing personal data as a core activity, to maintain a RoPA. Add "AI-assisted code generation via Claude Code" as a processing activity. Record: purpose (code development), categories of data (code containing business logic, possibly referencing personal data categories), processor (Anthropic), legal basis (legitimate interests or contract), and safeguards (DPA, code review gate).
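The record fields listed above could be captured in a register entry along these lines. This is a sketch only; the field names follow common Article 30 templates rather than any mandated format, and should be adapted to your existing RoPA structure.

```yaml
# Illustrative RoPA entry for the pilot (adapt to your own register format)
processing_activity: AI-assisted code generation via Claude Code
purpose: software development and code review support
data_categories:
  - source code and business logic
  - possible incidental personal data in comments or test fixtures
processor: Anthropic (under signed DPA)
legal_basis: legitimate interests (Art. 6(1)(f)) or contract
safeguards:
  - signed Data Processing Agreement
  - human code review gate before merge
  - developer guidance on prohibited data
```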

Developer guidance document. Write a one-page internal guide for pilot participants. It should cover: what types of data must never be pasted into a Claude Code session, how to reference sensitive data by placeholder variables instead of real values, and what to do if a session output contains data that should not have been shared.
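The "never paste real values" rule in that guidance can be backed by a lightweight pre-flight check. The sketch below is hypothetical and the patterns are illustrative, not exhaustive: extend them to match your own customer, account, and record identifier formats.

```python
import re

# Illustrative patterns only; real deployments should cover the identifier
# formats specific to the company's own data (account IDs, patient IDs, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EU VAT number": re.compile(r"\b[A-Z]{2}\d{8,12}\b"),
}

def scan_for_sensitive_data(text: str) -> list:
    """Return the names of any sensitive patterns found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

print(scan_for_sensitive_data("contact jane.doe@example.com"))   # -> ['email address']
print(scan_for_sensitive_data("use PLACEHOLDER_ACCOUNT_ID in tests"))  # -> []
```

Run as a pre-commit hook or a wrapper around prompt submission, this turns the one-page guidance from advice into a checkable control, and the second example shows the placeholder-variable pattern the guidance recommends.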

Sector layer. For financial services: verify that Claude Code use in systems that affect customer credit, eligibility, or access decisions is documented under your MiFID II or DORA change management process. For healthcare: check whether the software being developed is a medical device under MDR, which would require additional AI Act high-risk documentation. For legal: confirm with the firm's data protection officer that sharing code related to client matters does not create privilege concerns.

Step 3: CLAUDE.md Configuration as a Compliance Control

The CLAUDE.md project configuration file is not just a developer convenience. For regulated companies, it is a compliance control. The instructions in this file constrain what Claude Code is permitted to do within a session.

For a regulated European company, a production-grade CLAUDE.md should explicitly state:

  • The data classification of the codebase (e.g., "This codebase processes financial transaction data. Do not use real transaction IDs, account numbers, or customer identifiers in examples, tests, or inline comments.")
  • Which operations require human approval before execution (database migrations, infrastructure changes, dependency version bumps)
  • Which files are off-limits for AI-assisted editing (environment config files, secrets management files, GDPR consent flow logic if it is particularly sensitive)
  • The company's agreed code review policy for AI-generated code
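A minimal CLAUDE.md fragment reflecting the four points above might look like this. The file paths, classification statement, and directory names are placeholders to adapt to your own repository.

```markdown
# CLAUDE.md

## Data classification
This codebase processes financial transaction data. Never use real
transaction IDs, account numbers, or customer identifiers in examples,
tests, or inline comments; use placeholder values instead.

## Operations requiring human approval
- Database migrations
- Infrastructure or CI configuration changes
- Dependency version bumps

## Files off-limits for AI-assisted editing
- .env and any secrets management files
- src/consent/ (GDPR consent flow logic)

## Review policy
All AI-generated code is labelled [AI-assisted] in the PR title and
requires human review before merging to main.
```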

Committing this file to the repository creates an auditable record that the team operated under documented AI governance from pilot day one.

For the full guide to CLAUDE.md configuration, see the CLAUDE.md configuration guide for engineering teams.

Step 4: EU AI Act Classification for the Pilot

The EU AI Act requires companies that deploy AI systems to classify those systems by risk. Claude Code itself is a general-purpose AI tool, not a high-risk system. But the software you build with Claude Code may be high-risk if it performs one of the Act's listed functions: credit scoring, employment and workers management, access to essential private or public services, education and vocational training assessment.

For regulated companies, the practical question is whether any sprint in the pilot period will involve building features that fall into high-risk categories. If yes, document that at the sprint planning stage, not retrospectively. The documentation obligation under the Act is on the deployer of the resulting AI-enabled system, and traceability to the code generation tool is part of the record.
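That sprint-planning question can be made routine with a simple classification worksheet. This is a hedged sketch, not a legal determination: the category strings paraphrase the Annex III areas named in this guide, and the feature names are hypothetical.

```python
# Illustrative Annex III areas as named in this guide; a real worksheet
# should be reviewed against the Act's full text by counsel.
HIGH_RISK_AREAS = {
    "credit scoring",
    "employment and workers management",
    "access to essential private or public services",
    "education and vocational training assessment",
}

def classify_sprint(features: dict) -> dict:
    """Map each planned feature to whether its declared area is high-risk."""
    return {name: area in HIGH_RISK_AREAS for name, area in features.items()}

sprint = {
    "loan eligibility API": "credit scoring",
    "internal docs search": "internal tooling",
}
flags = classify_sprint(sprint)
print(flags)  # -> {'loan eligibility API': True, 'internal docs search': False}
```

Running this at sprint planning and committing the output produces exactly the contemporaneous record the deployer obligation asks for, rather than a retrospective reconstruction.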

The AI governance framework for European SMEs provides a full classification worksheet that can be completed before the pilot begins.

Step 5: Running the Pilot with Governance Checkpoints

Structure the pilot in three two-week phases:

Weeks 1-2: Configuration and calibration. Engineers configure their environments, commit CLAUDE.md files, and run Claude Code on non-production tasks (documentation, test scaffolding, code explanation). No AI-generated code merges to main. Purpose: build familiarity and identify edge cases specific to the codebase.

Weeks 3-4: Supervised production use. AI-generated code is permitted in PRs, with explicit labelling (e.g., an [AI-assisted] tag in the PR title). All PRs go through the standard review process. Track: how many AI-assisted PRs passed review unchanged vs. required significant rework.
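The tracking metric for weeks 3-4 can be computed directly from PR records. A hypothetical sketch, assuming the [AI-assisted] title convention above and a simple per-PR review outcome field (both are conventions you would define, not a standard API):

```python
# Hypothetical PR records: `title` follows the pilot's labelling convention,
# `review_outcome` is "unchanged" or "rework" as judged by the reviewer.
def ai_assisted_review_stats(prs: list) -> dict:
    assisted = [pr for pr in prs if "[AI-assisted]" in pr["title"]]
    unchanged = sum(1 for pr in assisted if pr["review_outcome"] == "unchanged")
    return {
        "ai_assisted_total": len(assisted),
        "passed_unchanged": unchanged,
        "required_rework": len(assisted) - unchanged,
    }

prs = [
    {"title": "[AI-assisted] Add retry logic", "review_outcome": "unchanged"},
    {"title": "[AI-assisted] Refactor billing", "review_outcome": "rework"},
    {"title": "Fix typo in README", "review_outcome": "unchanged"},
]
print(ai_assisted_review_stats(prs))
# -> {'ai_assisted_total': 2, 'passed_unchanged': 1, 'required_rework': 1}
```

A script like this, run against the PR list at the end of week 4, gives the week 5-6 review meeting a concrete number instead of anecdotes.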

Weeks 5-6: Evaluation and decision. Review the measurable outcomes defined in Step 1. Convene the review with engineering, legal or compliance, and a management sponsor. Options: extend to full team, narrow to specific use cases, or decline based on the data.

A Note on Professional Services and Founder-Led Companies

For founder-led software companies and professional services firms, the compliance overhead can feel disproportionate to a 3-person pilot. It is not. A growing software team that builds compliance controls into the pilot is building a governance foundation that will be required for any future regulated client engagement, ISO 27001 audit, or EU AI Act conformity declaration. The small cost of a structured pilot is insurance against much larger remediation costs later.

For organisations wanting a concrete compliance roadmap alongside the technical rollout, an AI consulting engagement can compress the governance design from weeks to days.

FAQ

Does GDPR apply if engineers only share non-personal code with Claude Code?

GDPR applies to processing of personal data. If the code shared contains personal data (customer names in comments, real account numbers in test fixtures, health data in database schema documentation), GDPR obligations apply. Code that contains no personal data in any form is outside GDPR scope, but in practice this is hard to guarantee without an explicit review process.

Is Claude Code classified as high-risk under the EU AI Act?

Claude Code itself is a general-purpose AI system, not a high-risk AI system under Annex III of the Act. However, AI systems built using Claude Code that perform high-risk functions (credit scoring, hiring, healthcare decisions) carry their own obligations. The obligation is on the company deploying the resulting system.

Can a 20-person company implement all of these steps without a dedicated compliance team?

Yes. Steps 1 and 3 (scope definition and CLAUDE.md configuration) can be completed by a CTO or engineering manager in a few hours. Step 2 (GDPR checklist) requires reviewing Anthropic's DPA and updating your RoPA, which legal counsel can do in a half-day. The EU AI Act classification in Step 4 is a 30-minute exercise for most engineering pilots that are not touching high-risk use cases.

Further Reading