
Field Report: An AI-Augmented PI Planning — What Actually Changes?

Brian PLUS 2026-03-30 inspearit

I facilitated my first PI Planning in 2014. Since then, I have done over a hundred — at Orange, Renault, Allianz, mid-size industrial companies. The format is well-established. Two days, 80 to 150 people, sticky notes everywhere, dependencies identified on the wall, an increment plan coming out at the end. It works. But it could work much better.

Six months ago, I tried something different: integrating AI agents into the PI Planning process. Not to replace humans — to amplify what they already do. Here is what happened, without sugar-coating.

The context

An ART of 9 teams in the insurance sector, roughly 110 people. The recurring problem at every PI Planning: dependencies. With 9 teams, the number of potential dependencies explodes: 36 possible team pairs, before even counting multiple dependencies per pair. Teams were spending half of day 1 manually identifying who depended on whom. And despite that, we always discovered unidentified dependencies mid-PI.

Second problem: capacity. Each team estimated their capacity in story points, but the references varied so much from one team to another that comparisons were useless. An "8" for Team Alpha was equivalent to a "13" for Team Delta.

What we tried

Agent 1: dependency analysis

We fed an agent with the history of the last 4 PIs: features, stories, identified dependencies, late-discovered dependencies. The agent was supposed to analyze the upcoming PI's features and predict likely dependencies before the PI Planning.
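To make the approach concrete, here is a minimal sketch of the kind of history-based scoring such an agent could rely on: flag team pairs that shared dependencies in a large fraction of past PIs. The data, team names, and threshold are illustrative assumptions, not the actual agent's implementation.

```python
from collections import Counter

# Historical dependencies: one list of (team_a, team_b) pairs per past PI.
# All names and numbers below are made up for illustration.
history = [
    [("backend", "data"), ("mobile", "backend")],
    [("backend", "data"), ("web", "payments")],
    [("mobile", "backend"), ("backend", "data")],
    [("web", "payments")],
]

def predict_dependencies(history, threshold=0.5):
    """Flag team pairs that co-occurred in at least `threshold`
    fraction of past PIs, with their co-occurrence score."""
    counts = Counter()
    for pi in history:
        for a, b in pi:
            counts[frozenset((a, b))] += 1  # order-independent pair
    n = len(history)
    return {tuple(sorted(pair)): count / n
            for pair, count in counts.items() if count / n >= threshold}

predicted = predict_dependencies(history)
# ("backend", "data") appears in 3 of 4 PIs, so it scores 0.75
```

A real agent would weight recency and feature similarity rather than raw co-occurrence, but the principle is the same: surface likely pairs before the teams walk into the room, and let the teams validate or reject each one.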

Result: the agent identified 23 potential dependencies. The teams validated 17 (74%). The remaining 6 were false positives — historical patterns that no longer applied. But here is the interesting part: the agent identified 4 dependencies that nobody had seen. Including a critical one between the backend team and the data team which, without this alert, would have blocked an entire PI objective.

Estimated time saved: 2.5 hours on day 1. But the real gain was the reduction in mid-PI surprises.

Agent 2: capacity calibration

This one did not work. The idea: use velocity history and delivery patterns to propose a realistic capacity per team. The problem: teams perceived it as a control tool, not a help. "Is the AI deciding how much we can deliver?" The resistance was immediate.

I should have seen it coming. Capacity is a sensitive topic — it is tied to team identity, pride, autonomy. Suggesting a number from "above" (even if it comes from an algorithm) is tantamount to questioning their competence. The issue is not technical — it is human.

We pivoted: instead of proposing a capacity, the agent shows the gaps between planned and delivered capacity over the last 4 PIs. Each team draws their own conclusions. It works better. Not perfectly — but better.
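The pivoted approach can be sketched in a few lines: instead of prescribing a number, compute and display each team's delivered-versus-planned ratio over the last PIs. Team names and figures below are invented for illustration.

```python
# (planned, delivered) capacity per PI, for the last 4 PIs.
# Illustrative data only, not the ART's real numbers.
capacity_history = {
    "alpha": [(40, 34), (42, 36), (38, 37), (45, 39)],
    "delta": [(60, 58), (55, 54), (62, 60), (58, 57)],
}

def capacity_gaps(history):
    """Return each team's overall delivered/planned ratio,
    rounded to two decimals."""
    return {
        team: round(sum(d for _, d in pis) / sum(p for p, _ in pis), 2)
        for team, pis in history.items()
    }

gaps = capacity_gaps(capacity_history)
# alpha delivered 146 of 165 planned points -> 0.88
# delta delivered 229 of 235 planned points -> 0.97
```

The point of the design is that the agent produces a mirror, not a verdict: each team reads its own ratio and decides what to do with it.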

Agent 3: real-time synthesis

During the PI Planning, an agent captured key decisions, risks, and commitments as they were raised, and produced a synthesis refreshed every hour. The RTE could at any point ask "where do we stand on open risks?" and get an answer in 30 seconds instead of combing through 40 sticky notes.
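Under the hood, this kind of agent amounts to structured capture plus simple queries. A minimal sketch of such a data model follows; the class names, fields, and sample items are assumptions for illustration, not the tool we used.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Item:
    kind: str          # "decision", "risk", or "commitment"
    text: str
    team: str
    status: str = "open"
    captured_at: datetime = field(default_factory=datetime.now)

class Synthesis:
    """Running capture of PI Planning items, queryable at any time."""
    def __init__(self):
        self.items: list[Item] = []

    def capture(self, item: Item) -> None:
        self.items.append(item)

    def open_risks(self) -> list[Item]:
        return [i for i in self.items
                if i.kind == "risk" and i.status == "open"]

board = Synthesis()
board.capture(Item("risk", "API contract still unclear", "backend"))
board.capture(Item("decision", "Freeze scope of the feature", "web"))
board.capture(Item("risk", "Data migration window", "data", status="resolved"))
# board.open_risks() returns only the backend item
```

Nothing here is sophisticated, which is exactly the point made below: the value comes from eliminating the manual re-reading of sticky notes, not from clever algorithms.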

Team feedback: unanimously positive. Not because it was revolutionary — but because it eliminated a thankless task that nobody enjoyed doing.

What I take away from this

After this experience, I have three convictions.

First conviction: AI shines at synthesis and pattern matching, not at relational tasks. Identifying dependencies in data history, producing summaries — that is where AI adds the most value. As soon as you touch estimation, negotiation, the compromises between teams — that is human, and it should stay that way.

Second conviction: copilot mode is non-negotiable. The agent proposes, the human decides. You do not ask an agent to decide a team's capacity or prioritize features. You ask it to illuminate the human decision. This is the foundational principle of the IAgile approach — and PI Planning is the ideal place to put it into practice.

Third conviction: the most useful agent is the one that eliminates a chore, not the one that impresses. The real-time synthesis agent had nothing spectacular about it. But it was the most appreciated, because it removed a task everyone hated. The capacity calibration agent was technically more impressive — and it was rejected.

For the next PI Planning, we plan to add a cross-cutting risk detection agent. And this time, I know it needs to be presented as a tool for the teams, not a tool for management. The difference in positioning changes everything when it comes to adoption.

To go further, see the SAFe AI-Native framework and how it structures AI integration into agile ceremonies.

Preparing a PI Planning and want to explore the AI option? 30 minutes to talk it through.

Book a slot →