When an AI project fails, the executive committee's first reaction is to look for a technical problem. The model wasn't good enough. The data was bad. The data team lacked skills. Sometimes that's true. But in most cases I see, the project was doomed before the first sprint — by decisions made in boardrooms, not Jupyter notebooks.
I'm not going to give you a miracle recipe. But I can tell you about the mistakes I see repeating, and what I do differently when I arrive early enough to prevent them.
Mistake 1: Starting with technology
"We want to deploy GPT-4." "We want to do generative AI." Nine out of ten briefs I receive open with a technology name. Not with a problem to solve.
So you end up looking for a problem that justifies the solution. And you always find one — it's just rarely the most important one. I supported a retail group that invested 6 months in a customer support chatbot "because everyone's doing it." Their real problem? A 35% team turnover rate. The chatbot is still there. So is the turnover.
What I do instead: before talking technology, I ask one question in the steering committee: "what problem is costing you the most this year?" Only then do we look at whether AI is the right answer. Often, it isn't.
Mistake 2: Delegating AI strategy to the CIO
AI isn't an IT topic. It's a business + tech + HR + legal topic. Delegating AI strategy to the CIO alone is like delegating digital strategy to the webmaster in 2005.
The CIO sees technical constraints. Business sees opportunities. Legal sees risks. HR sees human impact. Without this cross-functional view, you get an AI strategy that's actually an infrastructure strategy in disguise.
The question that unblocks: "who else should be in this room?" If the answer is "nobody, the CIO handles it," you've found the problem. AI governance must be cross-functional — 5 to 8 people representing business, tech, legal, and HR.
Mistake 3: Targeting full automation from the start
"AI will automate the end-to-end process." That's warning sign number 2 in my diagnostics. Full automation is a legitimate goal — but never a starting point.
When you announce full automation, you instantly trigger: union resistance, team fear, political pressure. The project becomes an HR issue before it becomes a technical one. And it dies from resistance long before proving its value.
What I do instead: optimize before transforming. Start in copilot mode — AI recommends, humans decide. Trust builds usage after usage. And usually, it's the teams themselves who end up asking for more automation. That's when you shift gears, not before.
Mistake 4: Ignoring data quality
"We have the data." Every time I hear that sentence in a kickoff, I know we're going to lose time. Having data and having usable data — that's the difference between having a garage and having a car that runs.
I documented a predictive maintenance project stopped after 4 months and 300,000 euros. The reason: the sensors recorded a reading every 5 minutes, but the model needed data every 10 seconds. Nobody had checked the sampling frequency before starting.
The test I apply: a 2-week Discovery sprint dedicated to a data audit. No model, no code, no dashboard. Just one question: is the data sufficient, in the right condition, at the right frequency? That 2-week sprint is the highest-ROI Go/No-Go I know. It saved at least 3 projects from disaster last year.
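A first pass of that audit can be automated. Here is a minimal sketch (hypothetical function and example values, not the actual audit tooling) that checks whether a sensor log's real sampling intervals meet the frequency the model needs, so the Go/No-Go rests on a number rather than on "we have the data":

```python
from datetime import datetime, timedelta

def sampling_gap_report(timestamps, required_interval):
    """Compare real sampling intervals against the model's requirement.

    timestamps: sorted list of datetime objects from the sensor log.
    required_interval: timedelta the model needs (e.g. 10 seconds).
    Returns (worst_gap, share_of_compliant_intervals).
    """
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    if not gaps:
        raise ValueError("need at least two timestamps to audit frequency")
    compliant = sum(1 for g in gaps if g <= required_interval)
    return max(gaps), compliant / len(gaps)

# Hypothetical log: readings every 5 minutes, model needing every 10 seconds.
logged = [datetime(2024, 1, 1) + timedelta(minutes=5 * i) for i in range(12)]
worst, pct = sampling_gap_report(logged, timedelta(seconds=10))
print(worst)         # worst observed gap: 0:05:00
print(f"{pct:.0%}")  # share of intervals meeting the requirement: 0%
```

On the predictive maintenance project above, a check like this would have returned a clear No-Go in an afternoon, not after 4 months.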
Mistake 5: Confusing adoption with deployment
The model is in production. Licenses are distributed. The C-suite announces success. Six months later, nobody uses it. Or worse: teams use it for appearances but ignore its recommendations.
Deploying a tool and getting it adopted are two radically different competencies. The first is technical. The second is human, managerial, cultural. And it's the second that determines ROI.
What I learned: adoption is prepared from the first sprint, not after go-live. The M3K framework I use structures this around four axes — posture, practices, measurement, and capitalization — but the key point is that measurement focuses on comprehension, not logins. People log in out of obligation. They understand out of choice.
Mistake 6: Measuring the wrong ROI
"The model has 94% accuracy." So what? Model accuracy isn't ROI. It's a technical metric that matters to data scientists, but it says nothing about business value.
Misleading metrics I see in reports:
- "4 hours saved per week" — without measuring what employees do with those hours
- "500 documents generated" — without measuring how many are read
- "10,000 tickets auto-classified" — without measuring impact on resolution time
The rule I set at the start of every engagement: every AI project must be linked to a business KPI. Not model accuracy. Did time-to-resolution decrease? Did NPS increase? Did cost-per-transaction drop? If you can't make the link, you don't have an AI project — you have a tech project looking for its reason to exist.
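To make that link concrete, here is a minimal sketch (invented figures, hypothetical helper) that reports an AI project against a business KPI, in this case time-to-resolution, instead of against model accuracy:

```python
from statistics import mean

def kpi_delta(baseline, post, lower_is_better=True):
    """Relative change in a business KPI after an AI rollout.

    baseline / post: lists of KPI observations before and after go-live
    (e.g. average ticket time-to-resolution in hours, one value per week).
    Returns (relative_change, improved); negative change means the KPI fell.
    """
    before, after = mean(baseline), mean(post)
    change = (after - before) / before
    improved = change < 0 if lower_is_better else change > 0
    return change, improved

# Invented numbers: weekly average time-to-resolution, in hours.
before_rollout = [18.0, 17.5, 19.0, 18.5]
after_rollout = [14.0, 13.5, 15.0, 13.9]
change, improved = kpi_delta(before_rollout, after_rollout)
print(f"time-to-resolution change: {change:+.0%}, improved: {improved}")
```

The point is the shape of the report, not the arithmetic: the input is a business measurement, and the output answers "did the KPI move?", which 94% accuracy never will.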
The pre-launch diagnostic
Before investing the first euro in your AI strategy, verify these six points:
- Is the business problem identified and quantified before the technology choice?
- Does governance include business, tech, legal, and HR?
- Is the initial scope copilot mode (decision support), not autopilot?
- Is a 2-week data audit planned before any development?
- Does a change management plan exist from sprint 1?
- Are success KPIs business KPIs, not model metrics?
If you answer "no" to more than two questions, that's not a disaster — it's normal. Most organizations I meet start at 1 or 2 out of 6. What matters is becoming aware of it before committing the budget, not after.
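For teams that want to track this over time, the six checks can be turned into a trivial scorecard. A minimal sketch (check wording paraphrased from the list above, example answers invented):

```python
# The six pre-launch checks, paraphrased from the diagnostic above.
CHECKS = [
    "Business problem identified and quantified before the technology choice",
    "Governance includes business, tech, legal, and HR",
    "Initial scope is copilot mode (decision support), not autopilot",
    "A 2-week data audit is planned before any development",
    "A change management plan exists from sprint 1",
    "Success KPIs are business KPIs, not model metrics",
]

def diagnostic(answers):
    """answers: dict mapping a check to True ("yes") or False ("no").
    Any check missing from the dict counts as a "no".
    Returns (yes_count, list of checks still to fix)."""
    gaps = [check for check in CHECKS if not answers.get(check, False)]
    return len(CHECKS) - len(gaps), gaps

# Hypothetical organization starting, as most do, at 2 out of 6.
answers = {CHECKS[0]: True, CHECKS[3]: True}
score, gaps = diagnostic(answers)
print(f"{score}/6, {len(gaps)} points to fix before committing budget")
```

Re-run it at each steering committee: the score moving from 2/6 toward 6/6 is itself a leading indicator that the project is being de-risked.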
To go deeper, explore the IAgile approach and its 6 principles, and see why AI transformations fail because of managers, not technology.