Every large organization I support hits the same critical moment. After 12 to 18 months of experimentation, they have 5 to 10 working POCs, a motivated data team, and a C-suite that wants to "scale." This is precisely when most fail.
Going from 5 POCs to 70 use cases in production isn't a matter of multiplying by 14. It's a change in kind. Skills, processes, governance: everything that worked at lab scale must be rethought.
Here's what I've observed supporting this scaling journey in organizations of 5,000 to 50,000 employees.
Phase 1: The POC wall (months 0-6)
When I arrive in an organization, the landscape is always the same: 5 to 10 brilliant POCs, driven by passionate teams, with spectacular results, at least under controlled conditions.
The problem: none are in production. Or rather, some are "technically" in production — the model runs — but nobody actually uses it. The warning signs are everywhere.
The 3 systematic blockers
- The POC-production chasm: the POC worked with hand-cleaned data. Production demands an industrialized data pipeline, monitoring, fallback procedures. The effort is 5 to 10 times the POC itself.
- Absent governance: each BU launched its own initiatives, with its own tools, vendors, and tech stack. Three different chatbots in the same company, and no AI governance to rationalize them.
- Adoption debt: POCs were built by data scientists, for data scientists. Business users were never involved in design. Result: powerful tools nobody understands.
What we do in Phase 1
Complete mapping of all AI initiatives: POCs, pilots, individual subscriptions, Shadow AI. The count always comes as a surprise: the last organization I audited had 47 AI initiatives, of which only 15 were officially identified.
Ruthless triage: for each initiative, three possible verdicts. Industrialize (data OK, real adoption, proven ROI). Pivot (good idea, execution needs rework). Stop (no business value or insufficient data). Typically, 30% industrialize, 30% pivot, 40% stop.
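To make the triage repeatable across dozens of initiatives, it helps to encode the verdict as an explicit rubric. Here is a minimal Python sketch; the criteria fields and decision rules are illustrative assumptions, not a fixed formula:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    data_ready: bool            # industrialized data pipeline feasible?
    weekly_active_users: int    # real usage, not activated licenses
    roi_proven: bool            # measurable impact on a business KPI

def triage(i: Initiative) -> str:
    """Return one of the three Phase 1 verdicts."""
    if i.data_ready and i.weekly_active_users > 0 and i.roi_proven:
        return "industrialize"  # data OK, real adoption, proven ROI
    if i.roi_proven or i.weekly_active_users > 0:
        return "pivot"          # good idea, execution needs rework
    return "stop"               # no business value or insufficient data

portfolio = [
    Initiative("support-chatbot", True, 120, True),
    Initiative("churn-model", False, 15, True),
    Initiative("doc-summarizer", False, 0, False),
]
for i in portfolio:
    print(f"{i.name}: {triage(i)}")
```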
AI committee setup: 5 to 8 people, cross-functional, with decision-making power. The committee kills as many projects as it validates — that's its value.
Phase 2: Industrialization (months 6-12)
Once triage is done, the 3 to 5 retained initiatives enter industrialization. This is the most thankless phase — and the most critical.
The shared platform
The first workstream is technical: building a shared AI platform. Not another tool. A common foundation: data access, compute, monitoring, deployment, logging. Every team deploys on the same infrastructure instead of reinventing their own.
This workstream takes 3 to 4 months and requires a significant budget. But it divides the cost of each subsequent use case by 5. It's the highest-ROI investment of the whole scaling journey.
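To make the "common foundation" idea concrete, here is a hypothetical Python sketch of a use case declaring its dependencies on shared services instead of rebuilding them. All service and field names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PlatformServices:
    # Shared foundation every use case plugs into (names are invented).
    data_access: str = "central-feature-store"
    compute: str = "shared-gpu-pool"
    monitoring: str = "unified-model-monitoring"
    deployment: str = "standard-ci-cd-pipeline"
    logging: str = "central-log-sink"

@dataclass
class UseCase:
    name: str
    owner_bu: str
    # New use cases inherit the shared services by default; only the
    # business-specific logic is new. That is where the cost division
    # comes from: pipelines, monitoring and deployment are not rebuilt.
    platform: PlatformServices = field(default_factory=PlatformServices)

maintenance = UseCase(name="predictive-maintenance", owner_bu="plant-ops")
print(maintenance.platform.monitoring)   # -> unified-model-monitoring
```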
The AI Champions network
In parallel, we deploy a network of AI Champions across the BUs. And I mean a serious one: 2 to 3 people per BU, real training (not a Friday afternoon webinar), and 20% of their time dedicated to the role. That's a significant investment, and I initially underestimated it: on one of my first scaling missions, we appointed Champions without freeing up their time. Result: they did it on top of their day job, and after 2 months nobody had time anymore. Since then, dedicated time is non-negotiable.
Their concrete role: identify high-impact use cases in their perimeter, facilitate adoption, escalate field problems to the AI committee, and measure what really matters — real adoption, not the number of activated licenses.
The metrics that matter
At this stage, the metrics change radically. You no longer measure model accuracy. You measure four things (see the sketch after this list):
- Real adoption: how many daily active users? (not monthly)
- Business value: what measurable impact on the target KPI?
- Cost per use case: how much does deploying a new use case on the platform cost?
- Time-to-value: how long from idea to value in production?
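As a rough illustration, here is how three of these four can be computed from usage events and project records (business value is measured against the target KPI, so it stays use-case specific). Field names and sample data are assumptions:

```python
from datetime import date

def daily_active_users(events: list[dict], day: date) -> int:
    """Distinct users with at least one interaction on the given day."""
    return len({e["user"] for e in events if e["date"] == day})

def cost_per_use_case(platform_cost: float, specific_cost: float,
                      n_use_cases: int) -> float:
    """Marginal cost once the shared platform is amortized over use cases."""
    return platform_cost / n_use_cases + specific_cost

def time_to_value(idea: date, first_value_in_prod: date) -> int:
    """Days from qualified idea to measurable value in production."""
    return (first_value_in_prod - idea).days

events = [
    {"user": "alice", "date": date(2024, 3, 1)},
    {"user": "bob",   "date": date(2024, 3, 1)},
    {"user": "alice", "date": date(2024, 3, 2)},
]
print(daily_active_users(events, date(2024, 3, 1)))         # 2
print(cost_per_use_case(400_000, 30_000, 10))               # 70000.0
print(time_to_value(date(2024, 1, 10), date(2024, 3, 1)))   # 51
```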
Phase 3: Acceleration (months 12-24)
This is where the magic happens. With a shared platform, an active Champions network, and working governance, the deployment pace accelerates spectacularly.
The first industrialized use case takes 6 months. The tenth takes 3 weeks. What changed between the two? Not the complexity of the use cases — the maturity of the organization. Data pipelines are in place, Champions know how to qualify a use case, teams know how to deploy. Essentially, the organization moved from "we do AI" to "we know how to do AI." And that difference is measured in weeks, not months.
Going from 10 to 70
At this stage, the strategy evolves. You're no longer looking for "safe" use cases with immediate high ROI. You start exploring more ambitious applications:
- Cross-cutting use cases: a model used by multiple BUs. Predictive maintenance in one factory becomes a reusable framework across 12 sites.
- Agentic chains: AI agents collaborating on complex workflows (a minimal sketch follows this list).
- Embedded AI: AI integrated into existing business tools, invisible to the end user.
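To show the shape of an agentic chain without tying it to any vendor, here is a hypothetical Python sketch where each "agent" is a plain function standing in for a model call:

```python
from typing import Callable

# Each agent consumes the previous agent's output. In a real system,
# these functions would wrap LLM or model calls; here they are stubs.
Agent = Callable[[str], str]

def extract_issue(ticket: str) -> str:
    return f"issue({ticket})"          # e.g., an extraction model call

def draft_fix(issue: str) -> str:
    return f"fix-plan({issue})"        # e.g., a reasoning step

def write_reply(plan: str) -> str:
    return f"customer-reply({plan})"   # e.g., a generation step

def run_chain(agents: list[Agent], payload: str) -> str:
    for agent in agents:
        payload = agent(payload)
    return payload

print(run_chain([extract_issue, draft_fix, write_reply], "ticket#4217"))
```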
Phase 3 mistakes
Acceleration success creates its own traps:
- C-suite excitement: "If 20 use cases work, let's do 200." No. The organization's capacity to absorb change is limited. FOMO returns in a different form.
- AI technical debt: multiplying models without solid MLOps creates invisible technical debt. 70 models in production without monitoring means 70 ticking time bombs (see the monitoring sketch after this list).
- Forgetting human change: in the rush to deploy models, organizations forget that each use case changes someone's daily work. The M3K never stops; it accompanies every deployment wave.
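To make the monitoring point concrete: even a naive scheduled check beats nothing. Below is a minimal sketch using a mean-shift statistic with an assumed threshold; real MLOps stacks use richer drift tests (PSI, Kolmogorov-Smirnov) plus data-quality and latency checks:

```python
import statistics

def drift_alert(baseline: list[float], recent: list[float],
                max_shift: float = 0.15) -> bool:
    """Flag a model whose mean prediction drifts from its baseline window."""
    shift = abs(statistics.mean(recent) - statistics.mean(baseline))
    return shift > max_shift

# Assumed prediction-score windows per deployed model.
models = {
    "churn-scoring":   ([0.31, 0.28, 0.33, 0.30], [0.52, 0.49, 0.55, 0.51]),
    "demand-forecast": ([0.70, 0.72, 0.69, 0.71], [0.68, 0.73, 0.70, 0.72]),
}
for name, (baseline, recent) in models.items():
    if drift_alert(baseline, recent):
        print(f"ALERT: {name} drifted; trigger review or retraining")
```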
Key numbers of successful scaling
Here are the benchmarks I observe in organizations that succeed at scaling:
- POC-to-production survival rate: 25-35% (vs. 8% average per McKinsey)
- Cost of 10th use case: 20-30% of the first (platform effect)
- Time-to-value: from 6 months (first use case) to 3-4 weeks (after 12 months)
- Real adoption: >60% daily active users on industrialized use cases
- Portfolio ROI: positive from the 8th-10th industrialized use case
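To see why portfolio ROI plausibly turns positive around the 8th-10th use case, here is a toy calculation with made-up numbers that reproduces the shape: per-use-case cost decays toward roughly 25% of the first (the platform effect) while each industrialized use case returns steady value. Only the shape matters, not the figures:

```python
first_cost = 500_000      # assumed cost of the 1st industrialized use case
floor = 0.25              # cost floor: ~25% of the first (platform effect)
annual_value = 300_000    # assumed yearly value per industrialized use case

cum_cost = cum_value = 0.0
for n in range(1, 13):
    # Per-use-case cost decays geometrically toward the floor.
    cost = first_cost * max(floor, 0.85 ** (n - 1))
    cum_cost += cost
    cum_value += annual_value
    status = "positive" if cum_value > cum_cost else "negative"
    print(f"use case {n:2d}: cost {cost:>9,.0f}  cumulative ROI {status}")
# With these assumptions, cumulative ROI turns positive at use case 9.
```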
These numbers aren't consulting-firm targets. They're field observations. And they vary: I've seen organizations go faster, and others never make it past Phase 2. The difference rarely comes down to technology. It comes down to leadership's willingness to kill projects that aren't working, and to give teams time to learn.
What this changes for leadership
Scaling transforms the leadership role in a way most don't anticipate. The AI sponsor who validated POCs one by one must become a portfolio manager — someone who arbitrates between dozens of competing initiatives, who accepts killing what doesn't work, and who measures overall ROI rather than each individual model's performance.
That's a deep shift in posture. You go from "does this POC work?" to "is my AI portfolio creating value?" And frankly, most leaders I meet aren't ready for that question yet. That's where the real managerial transformation work begins, with tools like the M3K.
To go deeper, discover how AI governance structures scaling, and how the IAgile approach integrates scaling into its 6 founding principles.