
Cognitive Biases and AI: Your Worst Enemy in Transformation

Brian PLUS 2026-03-30 inspearit

Last week, in an AI steering committee, I watched a CEO greenlight a 350,000-euro budget based on an 8-minute demo. The model generated pretty dashboards. Everyone was impressed. Nobody asked what data it had been trained on.

The main risk in your AI project probably isn't technical. It's cognitive. Biases — those mental shortcuts that make us efficient in daily life — become dangerous when they drive decisions worth hundreds of thousands of euros. And with AI, they amplify in ways most organizations haven't seen coming.

I'm not a cognitive psychologist. But after years of supporting AI transformations, I've learned to recognize the biases that kill projects. Here are five, and for each, what I concretely do to counter them.

Bias 1: The halo effect of AI demos

A model generates fluent text in 3 seconds. An agent summarizes 500 pages in 20 minutes. The room is impressed. The budget is approved. That's the wow effect in all its glory.

The halo effect occurs when the perceived quality of one dimension (text fluency, execution speed) contaminates our judgment of every other dimension (relevance, reliability, production feasibility). A successful demo doesn't prove a project is viable. It proves a controlled scenario works with selected data.

I watched an executive committee approve 400,000 euros based on a 12-minute demo. The model had been trained on a perfect dataset. In production, real data was 40% incomplete. The project was killed four months later.

What I do: after every demo, I ask three questions. What data was used? How does it differ from real data? What's the error rate on edge cases? Usually, a few seconds of awkward silence follow. And that silence is precisely what justifies the question.

Bias 2: Anchoring on the first POC

Anchoring bias makes us give disproportionate weight to the first piece of information received. In AI transformation, that first piece of information is often a POC — and it poisons everything that follows.

The POC showed 92% accuracy? That number becomes the mental reference for every decision-maker. When the production model drops to 74%, instead of evaluating whether 74% is sufficient for the use case, everyone asks "why did we lose 18 points?" The project is perceived as a failure even though it may already be creating value.

The reverse is equally dangerous: a failed first POC anchors the perception that "AI doesn't work here." Months of paralysis from a poorly designed test.

What I do: we define success criteria before the POC, not after. A POC at 92% has zero value if the business threshold is 95%. A POC at 70% can be a success if the current process runs at 55%. That framing keeps you from becoming a prisoner of the first number you see.
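The decision rule above can be made explicit. This is a minimal, illustrative sketch (the function name and thresholds are my own, not part of any framework): the verdict depends only on criteria agreed before the POC runs, never on the raw number alone.

```python
# Illustrative sketch: judge a POC against criteria fixed *before* it runs.
# All names and thresholds are hypothetical.

def evaluate_poc(poc_accuracy: float, business_threshold: float, baseline: float) -> str:
    """Return a verdict based on pre-agreed criteria, not on anchoring."""
    if poc_accuracy >= business_threshold:
        return "meets business threshold"
    if poc_accuracy > baseline:
        return "below threshold, but beats the current process"
    return "no measurable value over the current process"

# 92% still fails a 95% business requirement...
print(evaluate_poc(0.92, business_threshold=0.95, baseline=0.90))
# ...while 70% clearly beats a 55% manual baseline.
print(evaluate_poc(0.70, business_threshold=0.75, baseline=0.55))
```

The point is not the code; it is that the three inputs are fixed in writing before anyone sees the first accuracy figure.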

Bias 3: Confirmation in use case selection

Confirmation bias means seeking information that confirms what you already believe. In AI transformation, it manifests in a particularly insidious way: organizations choose use cases that confirm the existing strategy rather than those that would create the most value.

The marketing director wants a chatbot? A marketing chatbot it is. The CIO is convinced by predictive maintenance? A maintenance POC it is. Not because these are the best use cases. Because they confirm the convictions of the most influential decision-makers.

I've supported organizations where AI use case selection resembles a political exercise disguised as a strategic one. Prioritization data exists — business impact, technical feasibility, data availability — but it's ignored as soon as it contradicts the dominant narrative.

What I do: I set up objective scoring (impact × feasibility × data quality) and publish it before the prioritization meeting. Any deviation from the ranking must be justified in writing. It doesn't eliminate politics — let's be honest — but it makes it visible. And visibility changes behavior.
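As a concrete sketch of that scoring, here is one hypothetical way to compute and publish the ranking (the use case names and grades are invented for illustration; the only thing taken from the text is the impact × feasibility × data quality product):

```python
# Hypothetical scoring sketch: grade each use case 1-5 on three axes,
# multiply, and publish the ranking before the prioritization meeting.

use_cases = {
    "marketing chatbot":       (3, 4, 2),  # (impact, feasibility, data quality)
    "predictive maintenance":  (4, 2, 3),
    "invoice reconciliation":  (4, 4, 4),
}

def score(grades):
    impact, feasibility, data_quality = grades
    return impact * feasibility * data_quality

ranking = sorted(use_cases.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, grades in ranking:
    print(f"{score(grades):>3}  {name}")
```

Any decision that deviates from the published order then has to argue against a visible number, in writing, which is exactly what makes the politics visible.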

Bias 4: AI as the artificial intern

This bias is subtler. It means using AI to accelerate broken processes instead of fixing them. Like a manager giving repetitive tasks to an intern without questioning whether those tasks should exist at all.

83% of employees spend a third of their week in meetings, but only 11% find them productive. Organizations' response? Deploy AI note-taking tools. The problem was never note-taking. The problem is that those meetings should never have existed.

This is the foundational principle of the IAgile approach: optimize before transforming. If your process is broken, AI won't fix it — it will make it broken faster.

What I do: before every AI project, I ask the question point-blank: "does this process deserve to be accelerated, or eliminated?" The answer is often "eliminated." But eliminating it takes managerial courage, and it's always easier to buy an AI tool than to cancel a director's meeting.

Bias 5: Survivorship bias in benchmarks

When an executive committee studies AI, they read success stories. Amazon. Google. Leboncoin. Companies that successfully integrated AI at scale. What they don't read: the hundreds of companies that tried the same thing and failed.

That's survivorship bias: seeing only those who succeeded and concluding that success is the norm. According to Gartner, over 85% of AI projects never reach production. The articles you read about AI successes represent the remaining 15%.

This bias pushes organizations to underestimate difficulties, overestimate their maturity, and copy strategies designed for tech-native companies in radically different contexts.

What I do: for every success benchmark a client shows me, I look for a failure in the same sector. "OK, Leboncoin has 70 AI features in production. But which comparable company tried and failed? What differentiates them from you?" If the client can't find any differences, they haven't looked hard enough. Or they don't want to find them.


Structural debiasing: embed in process, not in goodwill

Knowing your biases isn't enough. If it were, cognitive psychologists would never make judgment errors — and they make them as often as everyone else does.

The only approach that works is structural: embedding debiasing in the decision-making processes themselves.

Concretely, the IAgile approach bakes this into the M3K framework through its Mindset pillar: training managers not just to use AI, but to think with AI, which means recognizing their own biases toward this technology.

The 5-question diagnostic

Before your next AI steering committee, ask these five questions. If you answer "no" to more than two, your biases are driving your strategy.

  1. Did you define POC success criteria before launching it?
  2. Does use case selection follow a published, transparent scoring system?
  3. Have you actively sought failure examples in your sector?
  4. Was the process you want to augment with AI audited first?
  5. Can your managers name at least three limitations of the AI they're deploying?
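The diagnostic above amounts to counting "no" answers. A minimal sketch, with invented answers purely for illustration:

```python
# Illustrative sketch of the five-question diagnostic: more than two
# "no" answers suggests biases are steering the strategy.

answers = {
    "success criteria defined before the POC": True,
    "use case scoring published and transparent": False,
    "failure examples actively sought in your sector": False,
    "target process audited before adding AI": True,
    "managers can name three limitations of the AI": False,
}

nos = sum(1 for ok in answers.values() if not ok)
verdict = "biases are driving your strategy" if nos > 2 else "structurally debiased enough to proceed"
print(f"{nos} 'no' answers: {verdict}")
```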

Cognitive biases aren't defects. They're normal brain mechanisms. But in a context where every decision commits hundreds of thousands of euros and months of work, "normal" isn't enough. You need to be deliberate.

To go deeper, discover the IAgile approach and how it structures debiasing at every transformation phase, and the M3K framework to develop lucid leadership in the face of AI.

Want to discuss this? Book a free 30-minute call, no strings attached.

Book a free diagnostic →