
From 5 to 70 AI Use Cases: A Scaling Field Report

Brian PLUS 2026-03-30 inspearit

Every large organization I support hits the same critical moment. After 12 to 18 months of experimentation, they have 5 to 10 working POCs, a motivated data team, and a C-suite that wants to "scale." This is precisely when most fail.

Going from 5 POCs to 70 use cases in production isn't a matter of multiplying by 14. It's a change in nature. Skills, processes, governance — everything that worked at lab scale must be rethought.

Here's what I've observed supporting this scaling journey in organizations of 5,000 to 50,000 employees.

Phase 1: The POC wall (months 0-6)

When I arrive in an organization, the landscape is always the same: 5 to 10 brilliant POCs, carried by passionate teams, with spectacular results — under controlled conditions.

The problem: none are in production. Or rather, some are "technically" in production — the model runs — but nobody actually uses it. The warning signs are everywhere.

The 3 systematic blockers

What we do in Phase 1

Complete mapping of all AI initiatives: POCs, pilots, individual subscriptions, Shadow AI. The number always surprises. The last organization I audited had 47 AI initiatives, of which only 15 were officially identified.

Ruthless triage: for each initiative, three possible verdicts. Industrialize (data OK, real adoption, proven ROI). Pivot (good idea, execution needs rework). Stop (no business value or insufficient data). Typically, 30% industrialize, 30% pivot, 40% stop.

AI committee setup: 5 to 8 people, cross-functional, with decision-making power. The committee kills as many projects as it validates — that's its value.

Phase 2: Industrialization (months 6-12)

Once triage is done, the 3 to 5 retained initiatives enter industrialization. This is the most thankless phase — and the most critical.

The shared platform

The first workstream is technical: building a shared AI platform. Not another tool. A common foundation: data access, compute, monitoring, deployment, logging. Every team deploys on the same infrastructure instead of reinventing their own.

This workstream takes 3 to 4 months and costs real money. But it divides the cost of each subsequent use case by five, making it the highest-ROI investment of the entire scaling effort.

The AI Champions network

In parallel, we deploy a network of AI Champions across the BUs. And I mean seriously: 2 to 3 people per BU, real training (not a Friday-afternoon webinar), and 20% of their time dedicated to the role. That's a significant investment, and one I initially underestimated. On one of my first scaling missions, we appointed Champions without freeing up their time. They ended up doing it on top of their day job, and after two months nobody had any time left. Since then, dedicated time is non-negotiable.

Their concrete role: identify high-impact use cases in their perimeter, facilitate adoption, escalate field problems to the AI committee, and measure what really matters — real adoption, not the number of activated licenses.

The metrics that matter

At this stage, the metrics change radically. You no longer measure model accuracy; you measure real adoption and business value.

Phase 3: Acceleration (months 12-24)

This is where the magic happens. With a shared platform, an active Champions network, and working governance, the deployment pace accelerates spectacularly.

The first industrialized use case takes 6 months. The tenth takes 3 weeks. What changed between the two? Not the complexity of the use cases — the maturity of the organization. Data pipelines are in place, Champions know how to qualify a use case, teams know how to deploy. Essentially, the organization moved from "we do AI" to "we know how to do AI." And that difference is measured in weeks, not months.

Going from 10 to 70

At this stage, the strategy evolves. You're no longer looking only for "safe" use cases with immediate, high ROI. You start exploring more ambitious applications.

Phase 3 mistakes

Acceleration success creates its own traps.


Key numbers of successful scaling

Here are the benchmarks I observe in organizations that succeed at scaling:

These numbers aren't consulting firm targets. They're field observations. And they vary — I've seen organizations go faster, and others never make it past phase 2. The difference rarely comes down to technology. It comes down to leadership's willingness to kill projects that aren't working, and to give teams time to learn.

What this changes for leadership

Scaling transforms the leadership role in a way most don't anticipate. The AI sponsor who validated POCs one by one must become a portfolio manager — someone who arbitrates between dozens of competing initiatives, who accepts killing what doesn't work, and who measures overall ROI rather than each individual model's performance.

That's a deep shift in posture. You go from "does this POC work?" to "is my AI portfolio creating value?" And frankly, most leaders I meet aren't ready for that question yet. That's where the real managerial transformation work begins, with tools like the M3K.

To go deeper, discover how AI governance structures scaling, and how the IAgile approach integrates scaling into its 6 founding principles.

Want to discuss this? Book a free 30-minute call, no strings attached.
