
AI Explainability: Why Nobody Uses Your Model

Brian PLUS 2026-03-30 inspearit

The model is deployed. Accuracy is excellent. The data scientists are proud. And six months later, business teams still aren't using the tool. Or worse: they use it but ignore its recommendations.

This scenario appears in half the AI transformations I support. The model works. Adoption doesn't. And the reason is almost always the same: nobody understands how the AI reaches its conclusions.

Let me be frank: if your model performs well but nobody uses it, the problem is not technology. The problem is that you're asking people to trust a black box. And in their shoes, you wouldn't either.

The story of the 89% model nobody used

I'll start with a story, because it sums up everything. A telecom operator with 12,000 employees deploys an XGBoost churn prediction model. Accuracy: 89%. The data team is proud, the executive committee greenlights deployment. Three months later, the field sales reps have stopped using it.

Why? Because when the model said "this client will leave," the reps didn't know why. You can't pick up the phone and say "hello, an algorithm tells me you're about to cancel." They needed the cause — billing issue, network quality, competitor offer — to build a credible retention action.

The model was replaced by a rule-based system, co-built with field teams. Less accurate — 72% — but used daily. I wondered for a long time whether that was a failure or a success. Today, I'm convinced the 72% model created more value than the 89% model nobody consulted. Raw performance means nothing without adoption.

What managers need to understand (and what they don't need to know)

I'm simplifying on purpose here, but explainability works like a spectrum. On one end, global transparency: the manager knows the churn model considers purchase frequency, complaints, and time since last contact. They don't know the exact weight of each factor, but they see the logic. That's enough for ticket sorting or content recommendations.

On the other end, local explanation: "this client will leave because purchase volume dropped 40% in 3 months and they filed 3 unresolved complaints." Now the sales rep can act. That's the level of detail every role needs when an AI decision has to be followed by human action.
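
Concretely, this kind of local explanation can be produced with standard tooling. Below is a minimal sketch using SHAP values on an XGBoost classifier; the data, feature names, and model are synthetic and illustrative, not the operator's actual system.

```python
# Minimal sketch: surfacing why one client's churn score is high.
# Data, feature names, and model are synthetic and illustrative.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
features = ["purchase_volume_drop_3m", "open_complaints", "months_since_contact"]
X = pd.DataFrame(rng.random((500, 3)), columns=features)
y = (X["purchase_volume_drop_3m"] + 0.2 * X["open_complaints"] > 0.7).astype(int)

model = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# SHAP values measure how much each feature pushed this client's score
# above or below the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

client = 0
contributions = pd.Series(shap_values[client], index=features)
for name in contributions.abs().sort_values(ascending=False).index:
    print(f"{name}: {contributions[name]:+.3f}")
```

That per-client breakdown is exactly what the telecom reps were missing: not the score, but the factors behind it.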

And then there's complete traceability — what data was consulted, what reasoning was followed, what alternatives were discarded. That's what the EU AI Act requires for high-risk systems (recruitment, credit, healthcare). Frankly, most organizations I see aren't there yet. But they'll have to get there, because it's now a legal obligation.
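
To make that concrete, a traceability record for a single AI-assisted decision could look something like the sketch below. The field names are illustrative assumptions on my part, not an AI Act compliance template.

```python
# Minimal sketch: what a traceability record for one AI-assisted decision
# could contain. Field names are illustrative, not a regulatory template.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List
import json

@dataclass
class DecisionRecord:
    model_version: str
    inputs_used: List[str]              # data actually consulted
    recommendation: str                 # what the model proposed
    rationale: List[str]                # top factors behind the proposal
    alternatives_considered: List[str]  # options discarded along the way
    human_decision: str                 # what the manager actually did
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    model_version="churn-model-1.4",
    inputs_used=["billing history", "complaint tickets", "network quality scores"],
    recommendation="prioritise a retention call",
    rationale=["purchase volume down 40% in 3 months", "3 unresolved complaints"],
    alternatives_considered=["no action", "discount offer"],
    human_decision="retention call scheduled",
)
print(json.dumps(asdict(record), indent=2))
```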

Why managers reject opaque AI

Managerial AI adoption is the bottleneck of most transformations. And rejection is rarely expressed openly. It takes subtle forms.

The first and most common: decorative usage. The manager opens the AI dashboard in meetings, shows the graphs, and makes decisions exactly as before. The tool is there to tick the "we're using AI" box in the quarterly report.

The second: workarounds. The team maintains its own parallel Excel spreadsheets. So Shadow AI doesn't always come from a desire for clandestine innovation — it often comes from an official tool that nobody understands.

The third, and the most dangerous: blind delegation. Some managers follow AI recommendations without understanding them, creating massive cognitive debt. I saw a procurement manager apply his pricing tool's suggestions for 5 months without ever checking them. When the source data changed format, prices went haywire. Nobody noticed for 3 weeks.

In all three cases, the transformation fails because of managers. Not because they're incompetent — because they were handed a tool and told "trust us."

Explainability as a change management lever

In the work I do with the M3K framework, I eventually understood that explainability is not a technical topic. It's a managerial posture topic.

Training managers on AI doesn't mean teaching them to prompt (even if that can help). It means giving them the keys to understand what business logic the model follows, to know when to trust the result and when to question it, and above all to be able to explain to their team why an AI-assisted decision was made.

A manager who can explain "AI recommends we prioritize this client because their satisfaction indicators dropped 30% and they represent 15% of our revenue" generates trust. A manager who says "AI says to do this" generates rejection.

Principles for explainable AI in enterprise

Start simple, complexify later

Data teams are tempted to deploy the most performant model possible. Wrong strategy. Start with a natively interpretable model (business rules, decision trees, logistic regression). Adoption will build trust. And trust will enable progressive complexification.
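
To make "natively interpretable" concrete, here is a minimal sketch with scikit-learn (synthetic data, illustrative feature names): a logistic regression whose standardized coefficients read directly as the business logic a manager can sanity-check.

```python
# Minimal sketch: a natively interpretable churn model whose logic can be
# read straight from its coefficients. Data and names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["purchase_frequency", "open_complaints", "months_since_contact"]
X = pd.DataFrame(rng.random((500, 3)), columns=features)
y = (0.5 * X["open_complaints"] - X["purchase_frequency"] > -0.2).astype(int)

pipeline = make_pipeline(StandardScaler(), LogisticRegression())
pipeline.fit(X, y)

# Each coefficient is the change in churn log-odds per standard deviation
# of the feature: a sign and a magnitude the business can challenge.
coefs = pd.Series(pipeline.named_steps["logisticregression"].coef_[0], index=features)
print(coefs.sort_values())
```

If the business contests a coefficient's sign, that conversation is itself a sign the model is being understood rather than merely tolerated.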

Co-build with end users

Explainability isn't designed in a silo. What's "explainable" to a data scientist isn't to a salesperson. Involve business teams from the design stage: what factors matter to them? What level of detail is useful? What format (tooltip, report, visual indicator) supports their decision-making?

Embed explanation in UX

Explainability shouldn't be a technical annex accessible to experts. It must be integrated into the interface: a visual confidence indicator, a sentence summarizing the reasoning, a link to contributing factors. If the explanation requires leaving the tool, nobody will consult it.
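
One way to keep the explanation inside the tool is to render the top contributing factors as a single sentence next to the score. A hypothetical sketch; in practice the factor list would come from whatever attribution method sits behind the model.

```python
# Minimal sketch: turning model attributions into a one-line explanation
# that sits next to the score in the interface. All names are illustrative.
from typing import List, Tuple

def render_explanation(score: float, factors: List[Tuple[str, float]], top_n: int = 2) -> str:
    """Build a short sentence from the top contributing factors."""
    top = sorted(factors, key=lambda f: abs(f[1]), reverse=True)[:top_n]
    parts = [f"{name} ({'raises' if weight > 0 else 'lowers'} the risk)" for name, weight in top]
    return f"Churn risk {score:.0%}, driven mainly by: " + ", ".join(parts) + "."

print(render_explanation(0.82, [
    ("purchase volume down 40% over 3 months", 0.42),
    ("3 unresolved complaints", 0.31),
    ("tenure over 5 years", -0.12),
]))
```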

Measure comprehension, not just usage

Most organizations measure AI adoption by activated licenses or logins. That's insufficient. Measure comprehension: can users explain why the model made a given recommendation? Do they distinguish between reliable and uncertain results? That's the true adoption marker.


The AI Act and regulatory urgency

Explainability is no longer just good practice. The EU AI Act, in force since 2024, imposes transparency obligations on high-risk AI systems: technical documentation, logging and traceability of decisions, clear information for the people who use the system, and effective human oversight.

Organizations that don't integrate explainability from design face regulatory sanctions and major reputational risk. The GDPR precedent showed that after-the-fact compliance costs ten times more than building it in from the start.

The test I run in the first meeting

The last time I put this question to a sales director ("can you explain to me how your AI tool arrives at its recommendations?"), he stared at me blankly for a good ten seconds. Then he said: "Actually, no. But it works well." That sums up the entire problem.

If your managers can't explain the tool's logic, if they can't tell the difference between a reliable result and an uncertain one, and if they don't know what to do when facing a counter-intuitive recommendation — then your explainability is insufficient. Regardless of the model's accuracy.

I know it sounds counter-intuitive: slowing down deployment to invest in explainability. But a model nobody understands is a model nobody uses. And a model nobody uses, no matter how performant, is worth nothing.

To go deeper, discover the M3K framework for structuring managerial adoption, and the IAgile approach for integrating explainability from the Discovery phase.

Want to discuss this? Book a free 30-minute call, no strings attached.

Book a free diagnostic →