The last time I was asked to "set up an AI committee," the CEO wanted something lightweight, operational, and fast at making decisions. What he did not want: yet another PowerPoint committee that meets quarterly to sit through presentations. We agreed on the diagnosis. The challenge was execution.
Here is the process I use, refined across four engagements. It takes 4 weeks. Not 4 months of strategic framing: 4 weeks of concrete work, with decisions made as early as week 2.
Week 1: the uncomfortable mapping
Before building anything, you need to know what already exists. And that is where things get uncomfortable.
I ask each department to list all their AI initiatives. Not just the official projects — the individual ChatGPT subscriptions, the stealth POCs launched by an enthusiastic project manager, the AI tools embedded in SaaS products that nobody has audited. Shadow AI everywhere.
The number always surprises. Last time, a mutual insurance company with 8,000 employees had 47 AI initiatives underway. 15 officially identified. 32 in the shadows. Including 3 different HR chatbots, built by three teams that never talked to each other.
This mapping, however uncomfortable, is the foundation of everything. Without it, your AI committee will be steering blind.
Week 1 deliverable: a spreadsheet with all AI initiatives (project, owner, budget, status, data used, identified risks). One page. Not a 50-slide report.
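If it helps to start from a template, here is a minimal sketch of that inventory in Python, writing the spreadsheet as a CSV. The column names mirror the fields above; the file name and the example row are invented for illustration.

```python
import csv

# Columns of the week-1 inventory; they mirror the fields listed above.
COLUMNS = ["project", "owner", "budget_eur", "status",
           "data_used", "identified_risks"]

# One illustrative row: every name and figure here is made up.
rows = [
    {"project": "HR onboarding chatbot", "owner": "HR Ops",
     "budget_eur": 40000, "status": "shadow POC",
     "data_used": "employee records",
     "identified_risks": "PII sent to a public LLM"},
]

# Write a flat, one-page CSV: six columns, no slides.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

The format is the constraint: one flat file anyone can sort and filter, which is exactly what the triage in week 3 will need.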
Week 2: composing the committee (and this is where politics enters the room)
The composition of your AI committee determines its ability to decide. Too many people, and it becomes a discussion forum. Too few, and decisions lack legitimacy.
My rule: 6 people maximum. Not 6 titles — 6 people who actually show up, who have read the briefs, and who have the authority to say yes or no. Specifically:
- A C-level sponsor who understands AI, not one who merely backs it politically. If your sponsor confuses an LLM with an ERP, you have a problem.
- A business representative from the most AI-advanced business unit. Someone who has touched data, not someone who talks about it in meetings.
- The CIO or their data delegate. For technical constraints, security, infrastructure.
- A legal/compliance representative. The AI Act is here. Better to integrate it from day one than to discover the obligations 6 months later.
- An HR representative. Because every AI deployment changes someone's daily work, and ignoring that is a recipe for failure.
- A facilitator (often my role). Someone who prepares the briefs, facilitates the debates, and ensures decisions are made and followed through.
I made the mistake once of caving to pressure and building a 12-person committee. It became a round-table where everyone presented their projects for 10 minutes. Zero decisions in 3 meetings. We cut back to 6. Decisions returned.
Week 2 deliverable: a one-page mandate (objectives, scope, decision-making authority, cadence) + the list of 6 members with their confirmed commitment.
Week 3: the first meeting that decides
The first AI committee meeting is decisive. If it looks like an information session, it is over — nobody will come back with the energy to decide.
My format: 90 minutes, 3 mandatory decisions.
The first 30 minutes: presenting the mapping (week 1). No debate — just the facts. The number of unidentified AI initiatives usually creates enough tension to make the rest productive.
The next 40 minutes: triage. For the 10 most expensive or risky initiatives, three possible verdicts — continue, pivot, stop. The committee must decide on each one. Not "we'll discuss it later." A decision.
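What matters is that the shortlist exists before the meeting, not how it was built. As one possible way to prepare it, here is a minimal sketch that pulls it from the week-1 inventory file; the file name and the budget-or-flagged-risk heuristic are my assumptions, not a prescribed method.

```python
import csv

# Reuses the ai_inventory.csv sketch from week 1 (hypothetical file name).
with open("ai_inventory.csv", newline="") as f:
    initiatives = list(csv.DictReader(f))

# "Most expensive or risky": the biggest budgets, plus anything that
# already has an identified risk on record.
by_budget = sorted(initiatives, key=lambda i: float(i["budget_eur"]), reverse=True)
flagged = [i for i in initiatives if i["identified_risks"].strip()]

# Dedupe by project name and keep at most 10 for the 40-minute triage.
shortlist = list({i["project"]: i for i in by_budget[:10] + flagged}.values())[:10]

for item in shortlist:
    # Each initiative leaves the meeting with exactly one verdict.
    print(f"{item['project']}: continue / pivot / stop?")
```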
The last 20 minutes: identify the 3 immediate risks (personal data in public LLMs? Missing AI Act classification? Costly duplicates?) and name an owner for each.
If your AI committee does not have the power to kill a project, it is useless. That is the test. And the first meeting is the opportunity to prove it.
Week 3 deliverable: a one-page summary with decisions made, owners named, deadlines set.
Week 4: establishing the rhythm
A committee that meets once a quarter is not a steering committee. It is a reporting ritual. The cadence I recommend: every two weeks for the first 3 months, then monthly once the governance is running smoothly.
Each 60-minute meeting follows the same format:
- 15 min: status on previous decisions (done / not done / blocked)
- 30 min: new requests to arbitrate (maximum 3 per session)
- 15 min: weak signals and emerging risks
Week 4 is also the time to set up tracking indicators. Not 40 KPIs: 5 at most (a sketch for computing them follows this list):
- Number of active AI initiatives (goal: complete visibility)
- POC → production ratio (goal: >25%)
- Stopped initiatives (yes, this is a positive indicator — a committee that never kills anything is useless)
- Actual adoption of deployed tools (daily active users, not licenses)
- Open risks left untreated (goal: zero HIGH after 30 days)
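To show how mechanical this tracking can stay, here is a minimal sketch computing the five indicators. All records are invented, and the fields beyond the week-1 inventory (daily_active_users, licenses, the risk log) are assumptions added for illustration.

```python
from datetime import date

# Invented records; daily_active_users, licenses, and the risk log are
# assumed extensions of the week-1 inventory, not official fields.
initiatives = [
    {"name": "claims triage",  "status": "production", "daily_active_users": 120, "licenses": 400},
    {"name": "HR chatbot",     "status": "stopped",    "daily_active_users": 0,   "licenses": 0},
    {"name": "doc summarizer", "status": "poc",        "daily_active_users": 15,  "licenses": 50},
]
risks = [
    {"label": "PII in a public LLM", "severity": "HIGH",
     "opened": date(2025, 1, 10), "closed": None},
]

in_prod = [i for i in initiatives if i["status"] == "production"]
pipeline = [i for i in initiatives if i["status"] in ("poc", "production")]

kpis = {
    # 1. Complete visibility: this number must match reality, not ambition.
    "active_initiatives": sum(i["status"] != "stopped" for i in initiatives),
    # 2. Share of POCs that reached production (goal: >25%).
    "poc_to_production": len(in_prod) / len(pipeline) if pipeline else 0.0,
    # 3. A committee that never stops anything shows up here as a zero.
    "stopped": sum(i["status"] == "stopped" for i in initiatives),
    # 4. Adoption as daily active users per license, not licenses sold.
    "adoption": sum(i["daily_active_users"] for i in in_prod)
                / max(sum(i["licenses"] for i in in_prod), 1),
    # 5. Open HIGH risks older than 30 days (goal: zero).
    "stale_high_risks": sum(
        r["severity"] == "HIGH" and r["closed"] is None
        and (date.today() - r["opened"]).days > 30
        for r in risks
    ),
}
print(kpis)
```

One dictionary, five numbers: if producing the dashboard takes longer than the meeting it feeds, the indicators are too many or too clever.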
Week 4 deliverable: a one-page dashboard with the 5 KPIs, first committee report to the executive team.
Mistakes I have made (so you don't have to)
Including too many people out of politeness. A director offended at not being on the committee is manageable. A 12-person committee that makes no decisions is fatal.
Not defining decision-making authority from the start. If the committee "recommends" but does not "decide," it will become a rubber-stamp body. The mandate must be explicit: the committee approves, rejects, or requests a pivot. Period.
Underestimating political resistance. Stopping an AI project means stopping someone's project. That person has a director. That director has opinions. If the C-level sponsor does not publicly support the committee's decisions, even the unpopular ones, the committee will lose credibility within weeks.
To go deeper on AI governance as a whole, read the article on the missing link of AI governance; and to understand why adoption fails when managers are not prepared, see the M3K framework.