
AI and CSR: The Carbon Cost Nobody Measures

Brian 2026-03-30 inspearit

I will be honest: it took me a while to face this topic head-on. When you spend your days helping companies deploy AI, you do not particularly want to calculate the carbon footprint of what you recommend. It is uncomfortable. But ever since a CSR director sat me down in front of his own numbers during a steering committee, I cannot pretend anymore.

So let us talk about it. Not in anti-tech activism mode, not in greenwashing mode either. Just the facts, the tensions, and what you can concretely do when you are responsible for an AI strategy in a company.

What AI actually consumes

Let us start with the topic everyone avoids in strategy presentations. Training a language model like GPT-4 consumes as much energy as several hundred American households use in a year. A University of Massachusetts study estimated in 2019 that training a single large NLP model emitted as much CO2 as five cars over their full lifetimes. And since then, models have grown by a factor of 100.

But — and this is where many people stop — training is only part of the problem. Inference, meaning every query you send to an LLM, also consumes energy. Multiply that by millions of daily users, and inference now represents the majority of AI's energy consumption in production.

Then there is water. Data centers consume massive amounts of water for cooling. Microsoft reported that its water consumption increased by 34% in a single year, largely attributed to its AI investments. Google, same trend. In regions already under water stress, this is a concrete problem, not a theoretical debate.

And then there are the GPUs themselves. Rare earth materials, mining, the limited lifespan of specialized chips that end up as electronic waste. A hardware renewal cycle accelerated by the race for performance.

The measurement problem

Frankly, what surprised me most when digging into this was how nobody measures anything. I asked about fifteen CIOs: "What is the carbon impact of your AI deployments?" Zero had a number. Not one. Some had beautiful CSR reports about recycled paper and business travel, but nothing on the energy consumption of their API calls to LLMs hosted in the United States.

The problem is structural. When you use a model via an API (OpenAI, Anthropic, Google), you have zero visibility into the underlying infrastructure. You do not know if the data center runs on coal or nuclear. You do not know how many GPUs are mobilized for your query. The provider does not tell you — because they often do not know precisely themselves.

So the CSR reports of companies massively deploying AI have a gaping hole: digital Scope 3, meaning everything consumed by your cloud and AI providers that appears nowhere in your reports.

I used to think the problem was simple. It is not.

My first reaction was to think: look, we will just be more frugal. Fewer queries, smaller models, local fine-tuning. That is true — and we will come back to it. But I quickly realized the tension runs deeper.

Because AI is also a formidable tool for sustainability. Energy optimization models that reduce building consumption by 15 to 30%. Satellite-based climate monitoring that detects deforestation in real time. AI applied to environmental tracking at a scale impossible to achieve manually. Energy demand forecasting models that reduce waste on power grids.

So what do we do? Stop everything because it consumes energy? Obviously not. But we cannot wave the topic away by saying "yes but AI also helps the environment." That is too easy. The truth is that we need to measure both sides of the equation and make informed decisions.

What I concretely recommend

Here is what I now integrate into my AI governance engagements, and what I recommend to any company that takes CSR seriously.

Measure before you deploy. Before each new AI use case, estimate the consumption. Tools like CodeCarbon, ML CO2 Impact, or cloud provider calculators give order-of-magnitude estimates. It is not perfect, but it is infinitely better than zero. Include this estimate in your business case, on the same footing as the financial cost.
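To make this concrete, here is a minimal sketch of that kind of measurement with CodeCarbon; the project name and the run_inference_batch function are placeholders for your own workload.

```python
# Minimal sketch: wrap an AI workload with CodeCarbon to get a first
# order-of-magnitude estimate of its emissions. The project name and
# run_inference_batch() are placeholders for your own use case.
from codecarbon import EmissionsTracker

def run_inference_batch():
    # Placeholder for the workload you want to measure:
    # a fine-tuning job, a nightly batch of LLM calls, etc.
    ...

tracker = EmissionsTracker(project_name="email-triage-poc")
tracker.start()
try:
    run_inference_batch()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2eq, also written to emissions.csv

print(f"Estimated emissions for this run: {emissions_kg:.4f} kg CO2eq")
```

CodeCarbon estimates emissions from the measured power draw of the machine and the carbon intensity of the region it runs in, which is exactly the level of precision a business case needs.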

Choose the right model size. I have seen teams use GPT-4 to classify emails into three categories. A fine-tuned 7-billion parameter model did the same job, 50 times cheaper and 50 times less energy-intensive. Sobriety starts with using the right tool — not the most impressive one.
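Here is a sketch of what that right-sizing can look like, assuming the task really is a simple three-way triage; the model name is only an example of a small, task-sized alternative, not a recommendation.

```python
# Sketch: three-way email triage with a small, task-sized model instead of a
# frontier LLM API. The model name is only an example; a model fine-tuned on
# your own labels would typically be smaller and cheaper still.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # ~400M parameters, runs on a modest GPU or even CPU
)

labels = ["billing", "technical support", "sales inquiry"]
email = "Hi, I was charged twice for my subscription last month."

result = classifier(email, candidate_labels=labels)
print(result["labels"][0])  # most likely category, e.g. "billing"
```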

Question your cloud provider. Where are the data centers? What energy mix? What carbon commitments? If your provider cannot answer, that is a signal. Not a signal to switch immediately — but to start asking the right questions.

Integrate digital sobriety into your AI governance. The AI committee should not only steer use cases and budgets. It should also track the environmental footprint of the AI portfolio. A simple indicator: estimated energy consumption per use case, updated quarterly.
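As a sketch of what that indicator can look like in practice: the use cases and figures below are placeholders, and in reality they would come from CodeCarbon logs, cloud provider dashboards, or per-token estimates.

```python
# Sketch of the quarterly indicator: estimated energy consumption per AI use case.
# The figures are placeholders; in practice they would come from CodeCarbon logs,
# cloud provider dashboards, or per-token estimates.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    queries_per_quarter: int
    est_kwh_per_query: float  # order-of-magnitude estimate, not a precise measurement

    @property
    def est_kwh_per_quarter(self) -> float:
        return self.queries_per_quarter * self.est_kwh_per_query

portfolio = [
    AIUseCase("email triage", 120_000, 0.0003),
    AIUseCase("contract summarization", 8_000, 0.004),
    AIUseCase("support chatbot", 450_000, 0.001),
]

for uc in sorted(portfolio, key=lambda u: u.est_kwh_per_quarter, reverse=True):
    print(f"{uc.name:25s} {uc.est_kwh_per_quarter:8.1f} kWh / quarter")
```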

Favor local inference when possible. A model deployed on-premise or at the edge, calibrated for your specific use case, uses a fraction of the energy of an API call to a generalist model hosted on the other side of the world. It is also better for latency and data privacy.
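As a sketch of what local inference can look like, here is the same kind of triage served by a small model through Ollama; this assumes the Ollama daemon is running and the model has already been pulled, and the model name is only an example of a 7B-class model calibrated to the task.

```python
# Sketch: triage served by a small model running locally via Ollama instead of
# an API call to a remote generalist model. Assumes the Ollama daemon is running
# and the model has been pulled; the model name is only an example.
import ollama

prompt = (
    "Classify this email as one of: billing, technical support, sales inquiry.\n"
    "Answer with the category only.\n\n"
    "Email: Hi, I was charged twice for my subscription last month."
)

response = ollama.chat(
    model="mistral:7b",
    messages=[{"role": "user", "content": prompt}],
)
print(response["message"]["content"])
```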

Sobriety as a competitive advantage

I will end with a conviction that emerged gradually. Companies that integrate digital sobriety into their AI strategy do not do it out of pure virtue — they find a concrete advantage in it.

First, regulation is coming. The CSRD in Europe already mandates detailed sustainability reporting. It will not be long before digital Scope 3 is explicitly required. Those who already measure will have a head start.

Second, sobriety often coincides with economic efficiency. A smaller, better-calibrated model costs less to operate. Fewer tokens consumed, lower cloud bills. Carbon optimization and financial optimization often go in the same direction.

Finally, for companies recruiting tech talent — and in 2026, who is not? — the environmental stance matters. The current generation of developers and data scientists asks the question: I see more and more candidates in interviews asking about the company's green AI policy.

To be entirely frank, I do not have a perfect answer to this tension. I continue to recommend AI deployments, because I believe in their value for organizations and sometimes for the planet. But I now refuse to do so without raising the question of environmental cost. It is the bare minimum when talking about responsible transformation.

Want to integrate digital sobriety into your AI strategy? 30 minutes to discuss it concretely.

Book a slot →