What a First 90 Days of AI Adoption Should Actually Look Like for a 10 to 50 Person Company
TL;DR: Enterprise 90-day AI sprints do not work for small teams. Here is what a realistic first 90 days of AI adoption should actually look like for a 10 to 50 person company.
Most "90-day AI transformation" frameworks were designed for enterprises with 500-plus employees, dedicated data teams, and an IT department that can absorb a parallel workstream. When a 25-person professional services firm or a 40-person logistics company tries to follow that playbook, the result is predictable: the plan stalls around week three because nobody has the bandwidth to execute it alongside their actual job.
The problem is not that 90 days is the wrong timeframe. It is that the enterprise version of 90 days assumes resources, governance structures, and change management capacity that small teams simply do not have. For a 10-to-50 person company in the Netherlands, the first 90 days of AI adoption is not a sprint to a deliverable. It is the foundation of an operating capability — the ability to identify, test, and absorb AI tools into how the company works, repeatedly, without external dependency.
Here is what that actually looks like, week by week.
Month 1: Audit and Classify (Weeks 1-4)
The first month is not about AI. It is about understanding what you have.
Weeks 1-2: Process and Data Inventory
Map the five to eight core workflows that consume the most time, involve the most repetition, or create the most friction. These are not always obvious. The CEO may think the bottleneck is client reporting. The operations lead may know it is actually supplier invoice reconciliation. Both perspectives need to be surfaced.
For each workflow, document: who does it, how long it takes, what data it touches, where the data lives, and how often it runs. This does not require a consultant or a specialised tool. A shared spreadsheet and three to four one-hour conversations with team leads will produce a usable inventory.
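If a free-form spreadsheet feels too loose, the same inventory can be captured as a small structured record so every workflow is described in the same terms. A minimal sketch in Python; the field names and the example row are illustrative, not prescriptive:

```python
from dataclasses import dataclass

@dataclass
class WorkflowRecord:
    """One row of the process inventory; all field names are illustrative."""
    name: str                # the workflow, e.g. "Supplier invoice reconciliation"
    owner: str               # who does it
    hours_per_cycle: float   # how long it takes
    data_touched: list[str]  # what data it touches
    data_location: str       # where the data lives
    runs_per_month: int      # how often it runs

inventory = [
    WorkflowRecord(
        name="Supplier invoice reconciliation",
        owner="Operations",
        hours_per_cycle=6.0,
        data_touched=["invoices", "purchase orders", "bank statements"],
        data_location="ERP export plus shared drive",
        runs_per_month=4,
    ),
]
```

The same six columns work just as well as spreadsheet headers; the point is that every workflow is described consistently, so the month-one ranking compares like with like.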
Weeks 3-4: Classify and Prioritise
Not every workflow is an AI candidate. Classify each one into one of three tiers (a scoring sketch follows the list):
Ready now: the process is well-defined, the data is digital and accessible, and the team is open to change. These are your first candidates.
Ready after preparation: the process is a good fit but the data is messy, trapped in a legacy system, or manually maintained. These need data work before AI applies.
Not an AI problem: the process is broken in ways AI will not fix — unclear ownership, inconsistent execution, or a policy issue rather than an efficiency issue. Fix these with process design, not technology.
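One way to keep this classification honest is to score each workflow on the three readiness dimensions and let the totals drive the ranking. A minimal sketch, assuming a 0 to 2 score per dimension; the thresholds are illustrative assumptions, not a standard:

```python
def classify(process_defined: int, data_accessible: int, team_open: int) -> str:
    """Each score runs 0 (poor) to 2 (good); thresholds are illustrative."""
    if process_defined == 0:
        # A process nobody runs consistently is a design problem, not an AI problem.
        return "Not an AI problem"
    scores = (process_defined, data_accessible, team_open)
    if all(s >= 1 for s in scores) and sum(scores) >= 5:
        return "Ready now"
    return "Ready after preparation"

print(classify(process_defined=2, data_accessible=2, team_open=1))  # Ready now
print(classify(process_defined=2, data_accessible=0, team_open=2))  # Ready after preparation
```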
By end of month one, you should have a ranked list of two to three workflows that are genuinely ready for a first AI experiment — and a clear understanding of why the others are not ready yet.
Month 2: First Workflow Automation (Weeks 5-8)
Month two is where you run your first controlled experiment. One workflow. One tool. One measurable outcome.
Week 5: Define the Experiment
Select the highest-ranked workflow from your month-one classification. Define three things before touching any tool (a template sketch follows the list):
- Baseline metric: how does this workflow perform today? Time per unit, error rate, cost per cycle — pick one primary metric and measure it before you change anything.
- Target threshold: what improvement would justify continuing? Be specific. "Faster" is not a threshold. "Processing time under 5 minutes per invoice with an error rate below 3%" is.
- Scope boundary: who will use this, on what data, for how long? A first experiment should involve two to four people, not the entire company.
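Writing the three definitions down in one fixed shape before the experiment starts makes them harder to soften afterwards. A minimal sketch, with illustrative values based on the invoice example above:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    workflow: str
    metric: str              # one primary metric, nothing else
    baseline: float          # measured before any tool is introduced
    target: float            # the threshold that justifies continuing
    participants: list[str]  # two to four people, not the whole company
    weeks: int               # the agreed experiment window

invoice_experiment = Experiment(
    workflow="Supplier invoice processing",
    metric="minutes per invoice",
    baseline=12.0,    # illustrative week-5 measurement
    target=5.0,       # "under 5 minutes per invoice"
    participants=["finance lead", "operations assistant"],
    weeks=2,
)
```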
Weeks 6-7: Run the Experiment
Deploy the AI tool against the selected workflow with the defined scope. This is not a proof of concept designed to impress stakeholders. It is a controlled test designed to produce a measurable result against your baseline.
During the experiment, track three things: the primary metric against your baseline, any exceptions or failures the tool produces, and the team's qualitative experience — is the tool actually usable in their daily routine, or does it add friction?
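In practice this tracking can be as simple as one log entry per processed unit. A minimal sketch, assuming the invoice experiment above; the entries could equally live in a shared spreadsheet:

```python
experiment_log: list[dict] = []

def log_run(minutes: float, exception: str | None = None,
            friction_note: str | None = None) -> None:
    """One entry per processed unit during the experiment window."""
    experiment_log.append({
        "minutes": minutes,              # the primary metric, compared to baseline later
        "exception": exception,          # tool failure or manual override, if any
        "friction_note": friction_note,  # qualitative team experience
    })

log_run(4.5)
log_run(9.0, exception="unreadable scan, entered manually",
        friction_note="re-upload flow is clumsy")
```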
For most sub-50 person companies, the right tool for a first experiment is a commercially available AI product — not a custom model. Document processing, email triage, meeting summarisation, customer query classification — these are solved problems with off-the-shelf tools that require configuration, not engineering.
Week 8: Evaluate and Decide
Compare results against your threshold. Three outcomes are possible (a decision-rule sketch follows the list):
Continue: the metric improved past the threshold, exceptions are manageable, the team finds it usable. Move to month three with this workflow and begin planning the next one.
Adjust: the results are promising but below threshold. Identify whether the gap is tool configuration, data quality, or workflow fit. Adjust and extend the experiment by two weeks.
Stop: the tool does not improve the metric, creates too many exceptions, or the team cannot integrate it into their routine. Document the learning. This is not failure — it is a data point that prevents a larger investment in the wrong direction.
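The decision itself can be written as a rule rather than a feeling, so that week eight is an evaluation and not a negotiation. A minimal sketch for a lower-is-better metric; the 25% "promising" band is an illustrative assumption, not a standard:

```python
def decide(measured: float, target: float,
           exceptions_manageable: bool, team_ok: bool) -> str:
    """Assumes a lower-is-better metric, e.g. minutes per invoice."""
    if measured <= target and exceptions_manageable and team_ok:
        return "Continue"
    # Illustrative band: within 25% of the target counts as promising.
    if measured <= target * 1.25 and (exceptions_manageable or team_ok):
        return "Adjust: extend the experiment by two weeks"
    return "Stop: document the learning"

# Using the invoice example: target of 5 minutes per invoice.
print(decide(measured=4.2, target=5.0, exceptions_manageable=True, team_ok=True))
print(decide(measured=5.8, target=5.0, exceptions_manageable=True, team_ok=False))
print(decide(measured=11.5, target=5.0, exceptions_manageable=False, team_ok=False))
```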
Month 3: Measure and Iterate (Weeks 9-12)
Month three is where the operating capability begins to form.
Weeks 9-10: Stabilise the First Workflow
If the month-two experiment succeeded, the first two weeks of month three are about making it stick. This means:
- Documenting the workflow with the AI tool integrated — not as a technical manual, but as a team-facing operating procedure that describes what the tool does, what the human does, and what to do when the tool is wrong
- Expanding access if the initial scope was limited to a subset of the team
- Confirming the EU AI Act classification for this use case — most internal productivity use cases fall into the minimal risk tier, but the classification should be documented. This takes less than an hour and creates a governance record that matters if your AI usage expands (a record sketch follows the list)
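The classification record does not require a compliance platform; a dated entry stating the use case, the tier, and the reasoning is enough to start. A minimal sketch; the fields and the tier wording are illustrative, and the tier itself should be verified against the EU AI Act for your specific use case:

```python
from datetime import date

ai_act_record = {
    "use_case": "AI-assisted invoice data extraction (internal productivity)",
    "tool": "off-the-shelf document processing product",
    "risk_tier": "minimal risk",  # illustrative; confirm against the Act's risk tiers
    "reasoning": "Internal back-office use; no decisions about individuals; "
                 "a human reviews every extracted invoice before posting.",
    "classified_on": date.today().isoformat(),
    "classified_by": "operations lead",
}
```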
Weeks 11-12: Begin the Second Experiment
With one workflow successfully automated and the team experienced in the evaluation process, begin the cycle again with the second-ranked workflow from your month-one classification.
This second cycle will be faster. The team now understands what a good AI experiment looks like: defined baseline, clear threshold, controlled scope, honest evaluation. That process knowledge is more valuable than any individual tool deployment.
What You Should Have After 90 Days
At the end of 90 days, a 10-to-50 person company following this approach should have:
- A process and data inventory that identifies which workflows are AI-ready and which are not
- One workflow with AI integrated and stabilised — producing measurable value, documented, with EU AI Act classification on file
- A second experiment in progress — applying the same structured approach to the next candidate
- An internal evaluation capability — your team now knows how to assess an AI tool against a real workflow, which means you are no longer dependent on vendor demonstrations or consulting recommendations to make adoption decisions
This is not a transformation. It is not a strategy deck. It is an operating capability — the ability to evaluate and absorb AI tools on your own terms, at your own pace, with evidence rather than enthusiasm driving decisions.
What This Approach Deliberately Avoids
It avoids the big-bang rollout. Deploying AI across multiple workflows simultaneously in a small company splits attention, creates competing priorities, and makes it impossible to attribute results to any single change.
It avoids custom model development. For most sub-50 employee companies, the first year of AI adoption should use commercially available tools, not bespoke models. Custom development makes sense once you have validated the use cases and understand your data — not as a starting point.
It avoids treating AI as a project. Projects have end dates. The goal of the first 90 days is to establish a repeatable cycle — audit, classify, experiment, evaluate, stabilise — that your team can run independently. If the process stops when the consultant leaves, it was a project, not a capability.
Frequently Asked Questions
How long does AI adoption take for a 10 to 50 person company?
The first 90 days should produce one stabilised workflow with AI integrated, a second experiment underway, and an internal evaluation capability. Full maturity — where AI is a routine part of how the company operates across multiple workflows — typically takes twelve to eighteen months of sustained, iterative adoption. The first 90 days establish the foundation.
What should a small company automate first with AI?
Start with the workflow that scores highest on three criteria: it is well-defined and consistently followed, the data is digital and accessible, and the team is open to trying something new. Common first candidates include document processing, email triage, meeting summarisation, and customer query classification — all of which have mature, commercially available AI tools.
Do I need a consultant for the first 90 days of AI adoption?
Not necessarily for the full 90 days — but a structured engagement at the start of month one (process audit and classification) and at key decision points (experiment design, EU AI Act classification) can significantly improve the quality of decisions. The goal is to build internal capability, not to create a permanent consulting dependency.
How do I measure whether AI adoption is working?
Define a baseline metric for the target workflow before introducing any AI tool, set a specific improvement threshold, and measure against it at the end of the experiment period. Qualitative team experience matters too — a tool that improves a metric but adds friction to the daily routine will not sustain adoption.
Further Reading
- From Pilot to Production: Why Dutch SMEs Get Stuck After the AI Proof of Concept
- When Not to Buy AI Consulting Yet
- How to Run an Internal AI Pilot Without Creating Governance Debt
- What an AI Readiness Assessment Should Cover
Start Your First 90 Days With Structure
If your team is ready to begin AI adoption but wants expert guidance on the audit, classification, and first experiment design, our AI Consulting engagement is designed for exactly this phase — right-sized for small teams, focused on building internal capability rather than creating dependency.
If you are not yet sure whether your team and data are ready to start, an AI Readiness Assessment will tell you what needs to be in place before the 90-day clock starts.

