
Why Amsterdam Accounting Firms Must Build AI Governance Before Anything Else


TL;DR: Amsterdam accounting firms face DNB, AFM, and EU AI Act pressure in 2026. Learn how to build a compliant AI strategy that prioritises governance over speed.

Amsterdam's accounting and financial advisory sector is under compounding regulatory pressure in 2026. Boutique firms, international tax practices, and compliance-focused advisories that serve mid-market clients are caught between the productivity promise of AI and an increasingly demanding supervisory environment. De Nederlandsche Bank (DNB) issued model risk guidance that explicitly addresses algorithmic tools in financial services. The AFM has signalled that AI-assisted client advice will be scrutinised under existing suitability and conduct rules. And since January 2026, the EU AI Act's high-risk classification for AI systems used in creditworthiness assessment and financial advisory has moved from policy paper to enforceable reality.

The firms arriving at AI consulting conversations with questions about "which tool is fastest" are asking the wrong question. The right question — and the one that separates firms that will scale AI responsibly from those that face regulatory exposure — is: what governance infrastructure do we need before we deploy anything?


The Regulatory Stack Amsterdam Accounting Firms Are Actually Navigating

Most AI consultants arriving from outside financial services underestimate how layered the Dutch regulatory environment is for accounting and advisory firms. Understanding the stack is not optional — it determines which AI use cases are permissible, under what conditions, and with what documentation obligations.

DNB's expectations on model risk management apply to any institution using models that influence decisions affecting clients or counterparties. For accounting firms offering advisory services, this means AI tools used to flag tax risks, benchmark valuations, or summarise regulatory exposure are not neutral productivity tools — they are models, and they carry model risk obligations. DNB expects model validation, performance monitoring, and clear ownership of model outputs.

The AFM's conduct supervision framework adds a client-facing layer. Any AI output that informs advice given to a client must be traceable, explainable, and consistent with the firm's documented advisory methodology. Firms cannot use "the AI suggested it" as a defence in an AFM review. The obligation to act in the client's interest — and to document how you did so — sits with the human adviser, regardless of how the recommendation was generated.

On top of these existing frameworks, the EU AI Act introduced enforceable high-risk obligations for AI systems used in financial advisory contexts from January 2026. Firms deploying AI tools that assist in creditworthiness assessment, tax risk scoring, or investment suitability analysis must maintain conformity documentation, implement human oversight mechanisms, and register certain systems with national authorities. For a 15-person accounting firm that simply signed up for an AI productivity suite, these obligations can arrive without warning.


The Three AI Use Cases That Are Already Live — and Already Risky

Before any formal AI strategy is in place, most Amsterdam accounting firms have already adopted AI in at least three areas. Each carries specific governance gaps.

Document summarisation and review. Tools that summarise client contracts, annual reports, or regulatory filings are in widespread use. The risk is not the summarisation itself — it is that output is being incorporated into client-facing advice without a documented review step. When a junior associate pastes an AI summary into a client memo, the chain of professional responsibility becomes unclear. Firms need a documented human-in-the-loop process for any AI output that enters client deliverables.

Tax research and case law retrieval. AI-assisted research tools speed up case law analysis and cross-border tax research significantly. The governance gap here is citation reliability. AI systems hallucinate references with high confidence. Without a verification protocol, incorrect citations can reach client reports. The fix is procedural — a mandatory verification step before any AI-sourced legal or regulatory reference is used in client work — but it must be written into the firm's quality management system, not just communicated informally.

Client communication drafting. Drafting engagement letters, client updates, and advisory summaries with AI assistance is common. The AFM risk here is that personalised advice language is being generated by a system that has no knowledge of the specific client's situation beyond what was entered into the prompt. Firms should audit which communication types are being AI-assisted and ensure that genuinely personalised advice — where the firm's professional judgement is the differentiating value — is not being reduced to a prompt output.


What an AI Governance Framework Looks Like for a Boutique Amsterdam Firm

Governance does not require a dedicated compliance team or a six-figure technology investment. For a firm of 10 to 30 professionals, a proportionate AI governance framework has four components.

An AI use case register. A documented list of every AI tool in use, the tasks it is applied to, and the data it accesses. This is the baseline for any regulatory conversation and the starting point for EU AI Act conformity assessment. It takes one working day to build and requires quarterly review.

A data classification and access policy. Client data — financial statements, tax returns, beneficial ownership information — is categorised as confidential and must never be entered into AI tools that train on user inputs or lack contractual data processing agreements. Firms need a clear approved-tools list and a prohibition on unapproved tools for client-related work.

Human review checkpoints. For every workflow where AI output influences client-facing work, a named human reviewer is responsible for verifying accuracy before the output leaves the firm. This is not an administrative burden — it is the minimum standard required to maintain professional indemnity coverage and satisfy AFM conduct expectations.

An incident and escalation procedure. If an AI tool produces an output that is used in client work and later found to be incorrect, the firm needs a defined process for assessing the impact, notifying the client where appropriate, and documenting the failure for regulatory purposes. Firms without this procedure are exposed in the event of a PI claim or supervisory inquiry.


Selecting AI Tools Under Dutch and EU Regulatory Constraints

The EU AI Act's high-risk classification creates a hard line for tool selection. AI systems that assist in making or informing decisions about creditworthiness, investment suitability, or tax risk exposure fall into the high-risk category when used in a professional advisory context. Providers of such systems have obligations — but so do the firms deploying them.

Before deploying any AI tool in a client-advisory workflow, Amsterdam accounting firms should conduct a proportionate conformity check. This means reviewing the provider's documentation on training data, model limitations, and human oversight mechanisms. It means confirming that the provider has a GDPR-compliant data processing agreement in place. And it means assessing whether the use case, as the firm intends to apply it, triggers high-risk classification under the Act.

General-purpose productivity tools — email drafting, meeting summaries, internal knowledge retrieval — sit outside the high-risk perimeter when used for internal workflows that do not directly inform client advice. This distinction matters operationally: firms can move quickly on internal productivity use cases while taking a more deliberate approach to client-facing AI applications.
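The three-part conformity check described above can be run as a simple gate before any client-advisory deployment. A minimal sketch follows; the checklist wording is an assumption paraphrasing this section, not the text of the EU AI Act, and a real assessment would record evidence for each answer, not just a yes/no.

```python
# Hypothetical pre-deployment gate mirroring the three checks above.

CONFORMITY_CHECKLIST = [
    "Provider documentation on training data, model limitations, "
    "and human oversight mechanisms reviewed",
    "GDPR-compliant data processing agreement signed with the provider",
    "Intended use case assessed against the AI Act's high-risk categories",
]

def may_deploy(answers: dict[str, bool]) -> bool:
    """Deployment proceeds only if every checklist item is answered 'yes'.

    Any item missing from `answers` is treated as a 'no'."""
    return all(answers.get(item, False) for item in CONFORMITY_CHECKLIST)

# Example: a tool with no signed DPA fails the gate.
answers = {CONFORMITY_CHECKLIST[0]: True, CONFORMITY_CHECKLIST[2]: True}
print(may_deploy(answers))
```

Treating a missing answer as a "no" is the conservative default: an unassessed tool should never pass the gate by omission.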

For an objective way to evaluate any AI tool against these criteria, the AI Tool Selection Scorecard for European SMEs provides a structured framework that accounts for sector-specific regulatory constraints.


Frequently Asked Questions

Does the EU AI Act apply to our accounting firm if we are just using an off-the-shelf AI tool?

Yes, in specific circumstances. Under the EU AI Act, firms that deploy AI systems — including off-the-shelf tools — in contexts that trigger the high-risk classification are considered deployers and carry compliance obligations. If your firm uses an AI tool to assist in producing advice that influences a client's financial decisions (tax planning, investment strategy, compliance risk assessment), the deployer obligations apply. This includes maintaining technical documentation, implementing human oversight, and monitoring system performance. The obligation does not sit with the software vendor alone.

What does DNB expect from accounting and advisory firms using AI models?

DNB's model risk management expectations, derived from its broader supervisory framework for institutions using quantitative models, require that any model influencing decisions be subject to validation before deployment, ongoing performance monitoring, and clear accountability for model outputs. For accounting firms, this most commonly applies to AI tools used in valuations, risk scoring, or scenario analysis. DNB expects proportionality — smaller firms are not held to the same standard as large banks — but the expectation of documented governance applies regardless of firm size.

How do we handle client data when testing AI tools?

Client data should never be used to test or evaluate AI tools unless the tool is already on the firm's approved list and covered by a signed data processing agreement. For testing purposes, use anonymised or synthetic data. This is both a GDPR obligation and a professional conduct requirement. A data classification policy that categorises client financial information as confidential and prohibits its use in unapproved systems should be in place before any AI evaluation begins.

What is the first step for a firm that has no formal AI governance in place?

Start with an audit of what is already in use. Survey your team and document every AI tool being used, the tasks it is applied to, and whether client data is involved. This use case register gives you a baseline from which to assess regulatory exposure, prioritise governance measures, and communicate with supervisors if required. Firms that have conducted this audit before any regulatory inquiry are in a structurally stronger position than those that have not.


Ready to build a compliant AI strategy? Book a free consultation with our team.