
4 Warning Signs Your AI Project Will Fail

Brian · 2026-03-30 · inspearit

An enterprise AI project typically costs between 900,000 and 1.8 million euros. According to Gartner, over 85% never reach production. McKinsey confirms: only 8% of organizations successfully deploy AI at scale.

These numbers are not inevitable. They're symptoms. And in the field — in PI Planning, in steering committees, in team retrospectives — I've learned to recognize the signals long before a project officially derails.

Here are the four most reliable warning signs. If you spot even one in your organization, stop everything and fix it before you keep investing.

Sign 1: "We have the data"

Every time I hear this sentence in a kickoff, I know we're going to lose time. A confident sponsor, a PowerPoint slide with a database schema, and everyone nods. The project starts. Three months later, the data team discovers reality.

The data exists, yes. But it's in 14 different systems. With incompatible formats. Duplicates everywhere. Fields empty 40% of the time. And undocumented business rules, so the same column doesn't mean the same thing from one entity to the next.

I watched a predictive maintenance project at a manufacturing company stop dead after four months. Reason: the sensor data existed, but nobody had verified the sampling frequency. The models needed data every 10 seconds. The sensors recorded every 5 minutes. Four months and 300,000 euros to discover that.

How I detect it: I ask "who did the data audit?" in the kickoff. If there's silence, or someone answers "we'll handle that in sprint 2," I know we're heading for a wall. When "data quality" doesn't appear in any backlog and the data team wasn't consulted during scoping, the project is already on borrowed time.

What I recommend: a 2-week discovery sprint, dedicated exclusively to data audit. No model, no code, no POC. Just: do we have what we need, in the condition we need it, at the frequency we need it? If the answer is no, you just saved 6 months.
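To make that discovery sprint concrete, here is a minimal sketch of the first checks such an audit runs, in Python with pandas. The file name, the column names, and the 10-second requirement are hypothetical, borrowed from the maintenance example above.

```python
import pandas as pd

REQUIRED_INTERVAL_S = 10  # the models need a reading every 10 seconds (hypothetical)

def audit(df: pd.DataFrame, ts_col: str = "ts") -> None:
    """Three checks that would have caught the sensor problem in week one."""
    # 1. Completeness: how often is each field actually empty?
    print("Null rate per column:")
    print(df.isna().mean().sort_values(ascending=False))

    # 2. Duplicates: identical rows inflate volumes and bias any model.
    print(f"Duplicate rows: {df.duplicated().mean():.1%}")

    # 3. Sampling frequency: is the data recorded as often as the models need it?
    intervals = df[ts_col].sort_values().diff().dt.total_seconds().dropna()
    print(f"Median interval: {intervals.median():.0f}s (required: {REQUIRED_INTERVAL_S}s)")

df = pd.read_csv("sensor_export.csv", parse_dates=["ts"])  # hypothetical export
audit(df)
```

Two weeks of this kind of work is cheap. The sampling-frequency check alone would have saved the project above four months.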

Sign 2: "AI will automate everything"

When an executive committee announces that AI will "automate the end-to-end process," I already know there's going to be a problem. Not because complete automation is impossible. But because it's rarely the right starting point.

The natural reflex: identify an expensive process, imagine a world where AI does it instead of humans, calculate the theoretical ROI, launch the project. It's seductive. It's also the fastest path to failure.

The problem is twofold. First, complete automation requires a maturity of data and process that very few organizations possess. Second, it triggers maximum change resistance: teams feel threatened, unions mobilize, the project becomes political before it becomes technical.

The IAgile approach I apply is different: we never start by transforming. We start by optimizing. AI as decision support, not replacement. A copilot, not an autopilot.

Concretely: instead of automating customer complaint analysis end-to-end, start by suggesting a categorization that the human agent validates or corrects. The human stays in the loop. Trust builds. Training data improves through feedback. And six months later, the team itself asks to automate the simple cases.
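A minimal sketch of that loop, with a hypothetical suggest_category standing in for the model: the model proposes, the agent decides, and every correction is logged as a labeled example for the next training run.

```python
import json

def suggest_category(complaint: str) -> tuple[str, float]:
    # Hypothetical model call; a trivial rule stands in for it here.
    if "refund" in complaint.lower():
        return "billing", 0.85
    return "other", 0.40

def handle_complaint(complaint: str, feedback_log: str = "corrections.jsonl") -> str:
    suggested, confidence = suggest_category(complaint)
    # The agent always sees the suggestion, and the agent always decides.
    answer = input(f"Suggested '{suggested}' ({confidence:.0%}). "
                   "Enter to accept, or type a correction: ").strip()
    final = answer or suggested
    if final != suggested:
        # Every correction becomes a labeled example for retraining.
        with open(feedback_log, "a") as f:
            f.write(json.dumps({"text": complaint, "label": final}) + "\n")
    return final
```

The design choice is in the last four lines: the feedback loop is what lets the team itself, six months later, ask to automate the simple cases.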

The telltale signs: an initial scope targeting full automation with no intermediate decision-support phase. ROI calculated on headcount replacement rather than capacity augmentation. And often, a sponsor who says "we'll do like Amazon" without ever having set foot in a logistics warehouse.

Sign 3: "The teams are ready"

No, they're not. And that's not a criticism — it's a structural reality.

I've supported dozens of transformations. In every one, the same pattern repeats. Leadership thinks teams are enthusiastic because they attended an impressive demo. In reality, behind the initial fascination, there's fear, confusion, and above all a void: nobody has explained what this concretely changes in their daily work.

Managers are hit first. AI is an amplifier: it reveals and amplifies both the strengths and weaknesses of the management in place.

Without a change management framework, here's what happens: early adopters advance alone, disconnecting from the rest of the team. Skeptics entrench. Middle managers — those who should carry the change — don't know what to do. The project fragments.

That's why I designed the M3K framework: Mindset, Methods, Metrics, Knowledge. Four dimensions to work on simultaneously. Not a one-off training. A continuous journey that addresses posture (Mindset), practices (Methods), progress measurement (Metrics) and capability building (Knowledge).

The pattern I spot: no change management plan. No measurement of real adoption (beyond activated licenses). No adapted team rituals. Managers who have never used the tool they're supposed to deploy.
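Measuring real adoption rather than activated licenses is cheap if the tool produces usage logs. A sketch, assuming a hypothetical log with one row per user per day of actual use:

```python
import pandas as pd

LICENSES = 250  # activated licenses (hypothetical)

# Hypothetical log: one row per user per day the tool was actually used.
usage = pd.read_csv("usage_log.csv", parse_dates=["day"])

# Weekly active users: people who used the tool in the last 7 days,
# not people who merely hold a license.
cutoff = usage["day"].max() - pd.Timedelta(days=7)
wau = usage.loc[usage["day"] >= cutoff, "user_id"].nunique()

print(f"Activated licenses: {LICENSES}")
print(f"Weekly active users: {wau} ({wau / LICENSES:.0%} real adoption)")
```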

The ultimate test: ask a middle manager to show you how they use AI in their daily work. If they can't, adoption is an illusion.

Sign 4: "The POC worked"

The most insidious trap. Because it comes with proof. Metrics. Graphs going up. The executive committee is convinced, budget is unlocked, we go to production.

And then everything collapses.

A POC works because it's protected. Hand-cleaned data. Controlled environment. Dedicated team. Use cases selected to succeed. That's its job: prove technical feasibility. But technical feasibility is only 20% of the problem.

The remaining 80% is everything the POC was protected from: messy real-world data, a live environment, shared teams, the use cases nobody selected. That's what kills in production.

The pattern I spot: a POC validated without a production plan. No MLOps. No model performance monitoring. No retraining strategy. No runbook for degradation scenarios.
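To make "monitoring" and "retraining strategy" tangible, here is a minimal sketch of a check a run team would schedule daily. The baseline, the threshold, and the alerting are hypothetical; a real setup would also watch the inputs for drift, not just the outputs.

```python
import numpy as np

BASELINE_ACCURACY = 0.91  # validated before go-live (hypothetical)
ALERT_THRESHOLD = 0.05    # alert when we lose 5 points against the baseline

def daily_check(y_true: np.ndarray, y_pred: np.ndarray) -> bool:
    """Compare live accuracy on yesterday's labeled cases with the baseline."""
    accuracy = float((y_true == y_pred).mean())
    drop = BASELINE_ACCURACY - accuracy
    if drop > ALERT_THRESHOLD:
        # In production this pages the run team and opens a retraining ticket.
        print(f"ALERT: accuracy {accuracy:.1%}, {drop:.1%} below baseline")
        return False
    print(f"OK: accuracy {accuracy:.1%}")
    return True
```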

The pressure to move fast pushes organizations to confuse a proof of concept with a proof of value. They are two radically different things.


De-risking at every phase

These four warning signs are not inevitabilities. They're checkpoints. The IAgile approach I apply with my clients structures de-risking in three phases.

Discovery phase (weeks 1-4). Build phase (weeks 5-16). Scale phase (weeks 17+).

The IAgile checkpoint system

Every phase transition passes through a formal checkpoint. Not a validation meeting where everyone says yes. A real decision point with objective criteria.

Discovery → Build checkpoint: Is the data sufficient? Is the scope realistic? Are stakeholders aligned? Does the change management plan exist?

Build → Scale checkpoint: Does real adoption exceed 60%? Does the model perform on unseen data? Is production infrastructure ready? Is the run team trained?

If a single criterion is red, we don't proceed. We fix first. It's counterintuitive in a "go fast" culture, but it's what makes the difference between the 8% that succeed and the 85% that fail.
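Encoded as a gate, the rule is deliberately binary. A sketch of the Build → Scale checkpoint, with the four criteria above as hypothetical checks over a metrics dictionary:

```python
CHECKPOINT = {
    "real adoption > 60%":     lambda m: m["weekly_active"] / m["licenses"] > 0.60,
    "performs on unseen data": lambda m: m["holdout_score"] >= m["target_score"],
    "production infra ready":  lambda m: m["infra_ready"],
    "run team trained":        lambda m: m["run_team_trained"],
}

def build_to_scale(metrics: dict) -> bool:
    reds = [name for name, check in CHECKPOINT.items() if not check(metrics)]
    for name in reds:
        print(f"RED: {name}")
    # A single red criterion stops the transition: fix first, then re-run.
    return not reds

go = build_to_scale({
    "weekly_active": 140, "licenses": 250,   # 56% adoption -> red
    "holdout_score": 0.89, "target_score": 0.85,
    "infra_ready": True, "run_team_trained": False,
})
# Prints two RED lines and returns False: we fix before we scale.
```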


These warning signs didn't come from a book. I learned them in PI Planning, watching entire SAFe trains mobilize on poorly scoped AI projects. In steering committees, watching sponsors defend POCs they confused with products. In retrospectives, listening to teams explain how they knew from the first sprint it wouldn't work, but nobody listened.

The cost of a failed AI project isn't just financial. It's the team's trust in the next initiative. It's management credibility. It's lost time that won't come back.

So before investing 900,000 euros in your next AI project, invest 4 weeks in a real diagnostic. You'll know exactly where you stand. And you'll be able to decide based on facts.

To go deeper, discover the IAgile approach: 6 principles for merging agility and artificial intelligence, and the M3K framework to structure managerial transformation.

Want to discuss this? Book a free 30-minute call, no strings attached.

Book a free diagnostic →