5 Signs Your AI Engagement Is Failing Before Day 1
TL;DR: More than 80% of AI projects fail. Most show warning signs before any code is written. Here are five red flags to catch in the first two weeks of an AI consulting engagement.
More than 80% of AI projects fail to deliver measurable business value. That number has been consistent across industry analyses for years, and it has not improved meaningfully despite the acceleration of AI tooling and the rise of generative AI. The Netherlands reflects this pattern: 95% of companies have adopted AI, but only a fraction create real value from their investments.
The cause is rarely technical. Models work. APIs are stable. Cloud infrastructure is mature. The failure happens earlier -- in the scoping, the assessment, and the structure of the engagement itself. By the time the AI tool is being built or configured, the conditions for failure were set weeks earlier.
For North Holland SMEs with 10 to 50 employees, these early-stage failures are especially costly. There is no slack in the budget, the team, or the timeline to absorb a misdirected engagement. Recognising the warning signs before they compound is the highest-leverage thing a business leader can do.
Sign 1: No Data Audit in the First Week
The single most reliable predictor of AI project failure is proceeding without understanding the state of the data the system will depend on. If your AI consultant has not requested access to your data systems or conducted a data audit within the first week of the engagement, something is structurally wrong.
This does not mean a full data engineering assessment needs to be complete in five days. It means the consultant should be asking: Where does your data live? What format is it in? How fresh is it? What is the data pipeline from source to the point where an AI system would consume it? Are there gaps, duplicates, or quality issues that would affect model performance?
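The first-week checks above do not require heavy tooling. A minimal sketch of what a consultant's initial audit might look like in Python -- field names, the sample rows, and the audit thresholds are all illustrative assumptions, not a fixed schema:

```python
from datetime import date, datetime

def audit_records(records, key_field, date_field):
    """Lightweight first-week data audit: duplicates, gaps, freshness.

    `records` is a list of dicts, e.g. rows exported from a CRM or ERP.
    The field names passed in are assumptions about the client's schema.
    """
    total = len(records)

    # Duplicate check: how many rows repeat the same business key?
    seen, duplicates = set(), 0
    for row in records:
        key = row.get(key_field)
        if key in seen:
            duplicates += 1
        seen.add(key)

    # Gap check: rows missing the key or the date entirely.
    gaps = sum(1 for row in records
               if not row.get(key_field) or not row.get(date_field))

    # Freshness check: days since the most recent record was updated.
    dates = [datetime.strptime(row[date_field], "%Y-%m-%d").date()
             for row in records if row.get(date_field)]
    staleness_days = (date.today() - max(dates)).days if dates else None

    return {"total": total, "duplicates": duplicates,
            "gaps": gaps, "staleness_days": staleness_days}

# Illustrative sample: three order rows, one duplicate key, one missing date.
sample = [
    {"order_id": "A1", "updated": "2026-01-10"},
    {"order_id": "A1", "updated": "2026-01-10"},  # duplicate key
    {"order_id": "A2", "updated": ""},            # gap: no update date
]
report = audit_records(sample, "order_id", "updated")
```

A report like this, produced in the first week, is what "understanding the state of the data" looks like in practice -- it surfaces the duplicates, gaps, and staleness that would otherwise be discovered mid-build, at your expense.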
For a 20-person logistics company in Haarlem or a 40-person professional services firm in Amstelveen, the data environment is almost never clean enough for AI out of the box. A consultant who does not investigate this immediately is either planning to discover it later (at your expense) or does not understand that data quality is the precondition for everything else.
Sign 2: No Process Mapping Before Solutioning
AI is a tool. It works on processes. If the consultant begins recommending AI solutions -- tools, platforms, models, architectures -- before mapping the business processes those solutions are supposed to improve, the engagement has the sequencing backwards.
Process mapping means identifying which workflows are candidates for AI, documenting how those workflows actually operate today (not how leadership thinks they operate), and determining which ones are stable enough to automate and which need to be fixed first.
The most common version of this failure: the consultant recommends an AI-powered customer service chatbot before examining how customer inquiries currently flow through the organisation. The chatbot launches, handles 15% of queries, and creates more work for the team managing the 85% it cannot handle. Six months later, the tool is quietly retired.
For North Holland SMEs, process mapping does not need to be a six-week exercise. But it needs to happen before any solution is proposed. A consultant who skips this step is optimising for speed of delivery at the cost of relevance.
Sign 3: The Consultant Talks Tools Before Problems
Listen carefully to what the consultant leads with in early conversations. Do they start with your business problems and work toward solutions? Or do they start with AI capabilities and work backward to find applications?
The second pattern -- "here is what AI can do; let us find where it fits" -- is a reliable predictor of an engagement that will produce demonstrations rather than business value. It is a solution looking for a problem, and it consistently produces AI implementations that are technically functional but operationally irrelevant.
A competent consultant for a North Holland SME should be spending the first engagement phase understanding your operational pain points, your margin structure, your team's capacity constraints, and your competitive context. The AI recommendations should follow from that understanding -- not precede it.
If the consultant's first deliverable is a "what AI can do for you" presentation rather than a "here is what we found in your operations" analysis, that is the shape of the engagement to come. Expect impressive demonstrations. Do not expect operational impact.
Sign 4: No Mention of the EU AI Act
The EU AI Act's obligations have been phasing in since February 2025, with the requirements for high-risk systems applying from August 2026. Any AI consulting engagement conducted for a Dutch company that does not address regulatory classification is incomplete at best and negligent at worst.
This does not mean every engagement needs a 50-page compliance document. It means the consultant should be able to classify your planned AI use cases under the Act's risk tiers -- unacceptable, high, limited, or minimal -- and explain what obligations follow from that classification. If your use cases include AI in hiring, customer scoring, credit assessment, or decision-support systems, you likely have high-risk classification obligations that affect how the system must be built, documented, and monitored.
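The tier structure the consultant should be able to walk you through can be sketched as a simple lookup. This is an illustration of the Act's broad shape, not legal guidance -- the use-case names and tier assignments here are simplified assumptions, and a real classification needs case-by-case review:

```python
# Simplified EU AI Act risk-tier lookup -- illustrative only, not legal advice.
# The Act's Annex III lists most high-risk uses (employment, credit, etc.);
# prohibited practices sit in the "unacceptable" tier.
RISK_TIERS = {
    "social_scoring": "unacceptable",        # prohibited practice
    "cv_screening_for_hiring": "high",       # Annex III: employment
    "credit_scoring": "high",                # Annex III: essential services
    "customer_service_chatbot": "limited",   # transparency obligations
    "spam_filtering": "minimal",             # no specific obligations
}

def classify(use_case: str) -> str:
    """Return the (simplified) risk tier for a named use case."""
    return RISK_TIERS.get(use_case, "unclassified: needs assessment")
```

The point is not the code but the conversation it represents: by the end of the first engagement phase, every planned use case should have a named tier and a stated set of obligations, or an explicit note that it still needs assessment.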
A consultant who does not raise the EU AI Act early in the engagement is either unaware of it, does not consider it relevant (it is), or is deliberately avoiding a topic that might complicate their sales process. None of these are acceptable in 2026.
For North Holland SMEs, the EU AI Act is particularly important because these companies typically do not have in-house legal or compliance teams. The AI consultant is often the only external voice on both the technology and the regulatory context. That voice needs to be informed.
Sign 5: No Change Management Plan
AI adoption is a change management exercise disguised as a technology project. The tool is the easy part. Getting a team of 15 to 40 people to actually change how they work -- to trust AI outputs, to modify established routines, to learn new interfaces -- is the hard part.
A consultant who delivers an AI implementation without a change management plan is delivering a tool that will be under-used. The pattern is predictable: the system launches, adoption is enthusiastic for two weeks, daily usage drops as people revert to familiar workflows, and within three months the tool is used by one or two early adopters while the rest of the team works around it.
A change management plan for an SME does not need to be elaborate. But it should cover: Who will champion the tool internally? What training is needed? How will the team's concerns be surfaced and addressed? What does the adoption curve look like, and how will progress be measured? What happens when people get stuck?
If the consultant's engagement ends at deployment and does not include a plan for adoption, you are paying for a tool that your team will not use.
The Common Thread
All five signs share a root cause: the engagement is structured around delivering an AI system rather than solving a business problem. The consultant is optimised for the deliverable -- the tool, the model, the dashboard -- rather than the outcome.
For North Holland SMEs, where a failed AI engagement can cost six months of budget and an entire quarter of team attention, this distinction is not academic. It is the difference between an investment that compounds and a project that quietly disappears from the roadmap.
The 80%+ failure rate is not a technology problem. It is an engagement design problem. These five signs are visible in the first two weeks. Catching them early is significantly cheaper than discovering them at the end of a six-month engagement.
What to Do If You See These Signs
If you are already in an engagement and recognise one or more of these patterns, the most productive response is to pause and restructure -- not to cancel outright.
Request a data audit. Insist on process mapping before further solution recommendations. Ask the consultant to classify your use cases under the EU AI Act. Require a change management component before deployment.
If the consultant resists these additions or treats them as out of scope, that tells you something about the engagement's priorities. A consultant aligned with your business outcome will welcome the additional rigour. One aligned with their own delivery timeline will push back.
Book a call to discuss how to structure an AI engagement that avoids these failures
FAQ
What are the most common signs an AI consulting engagement will fail?
The five most reliable early warning signs are: no data audit in the first week, no process mapping before solutions are proposed, the consultant leads with AI capabilities rather than your business problems, no mention of EU AI Act compliance, and no change management plan for team adoption. All five are typically visible within the first two weeks.
Why do most AI projects fail for Dutch SMEs?
The failure rate exceeding 80% is rarely caused by technical problems. It results from poor engagement design: insufficient data assessment, solutions proposed without understanding business processes, regulatory obligations ignored, and no plan for team adoption. For SMEs with 10 to 50 employees, these structural problems are amplified because there is less organisational slack to absorb a misdirected investment.
Should I cancel an AI consulting engagement if I see red flags?
Not necessarily. The more productive response is to pause and restructure. Request a data audit, insist on process mapping, ask for EU AI Act classification, and require a change management plan. If the consultant resists these additions, that tells you whether the engagement is aligned with your business outcome or their delivery timeline.
How quickly can I tell if an AI consulting engagement is on the wrong track?
Most of these warning signs are visible within the first two weeks. If the consultant has not requested access to your data systems, has not begun mapping your business processes, and has not mentioned the EU AI Act by the end of week two, the engagement is structured for delivery rather than impact.