Beyond legal: Artificial intelligence and energy usage revisited
Around this time last year, I wrote optimistically about the relationship between AI and energy usage. Like many, I was hopeful this technology would follow the path of technologies before it, where optimisation entailed energy reduction.
Unfortunately, in the intervening year, we have gone in the opposite direction. Increased efficiency has led to greater energy use, undercutting potential emissions savings.
This phenomenon has a name — Jevons Paradox.
Wait! You just used an em dash. You must have written this with an LLM 😒
Em dashes are used to add emphasis, set off extra information, or signal a break in thought. In this case, I have used it for added emphasis, attempting to draw the reader’s eye to the name of the paradox.
They are as canonical a grammatical tool as parentheses. And they have history. Take this passage from Shakespeare’s Hamlet, near the end of the To be or not to be soliloquy:
“And enterprises of great pith and moment
With this regard their currents turn awry,
And lose the name of action.—Soft you now!
The fair Ophelia! Nymph, in thy orisons
Be all my sins remember’d!”
Others have written in defence of the em dash. I’m not here to defend the em dash itself (although if anyone’s putting together a petition, you have my email). Rather, I point to the em dash as an example; the discourse around it shows just how ubiquitous the usage of LLMs has become in the last year. No matter your personal stance on AI, LLMs have fundamentally changed the way we think about language. You’re no longer allowed to use a common piece of grammar without raising eyebrows.
To put it bluntly, no matter your individual or organisational choices: your ability to express yourself is now shaped by AI. And that’s far from the only thing AI is shaping in each of our daily lives.
When technological advancement increases resource efficiency, the resource becomes cheaper to use, which paradoxically drives higher overall consumption rather than conservation. When the increased consumption only partially offsets the emissions saved, it is called the rebound effect; when it wipes out the savings entirely, you have Jevons Paradox.
If that felt jargon-y, here’s an example:
You buy a fuel-efficient car that costs half as much to drive. You planned to use it for your work commute. But now that driving is cheaper, you start driving to see your friends who live nearby on the weekend, when previously you would have taken the train. You end up using more fuel overall than you saved. And that fuel ends up having a bigger environmental impact than if you had just driven your original car sparingly.
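The arithmetic of that backfire can be sketched in a few lines. All of the distances and fuel-consumption figures below are illustrative round numbers, not data:

```python
# Back-of-the-envelope rebound calculation with hypothetical numbers.

COMMUTE_KM_PER_WEEK = 200     # driving you did anyway
WEEKEND_KM_PER_WEEK = 250     # new trips, previously taken by train

OLD_CAR_L_PER_100KM = 8.0     # original car's fuel consumption
NEW_CAR_L_PER_100KM = 4.0     # "half as much to drive"

# Before: old car, commute only.
before = COMMUTE_KM_PER_WEEK / 100 * OLD_CAR_L_PER_100KM

# After: efficient car, but cheap driving induces extra trips.
after = (COMMUTE_KM_PER_WEEK + WEEKEND_KM_PER_WEEK) / 100 * NEW_CAR_L_PER_100KM

print(f"fuel before: {before:.1f} L/week, after: {after:.1f} L/week")
# fuel before: 16.0 L/week, after: 18.0 L/week
```

Despite the car being twice as efficient, total fuel use rises: the savings induced enough new driving to overshoot them.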
This rebound effect/Jevons Paradox is in full force with large language models (LLMs). Tech companies are passing the energy savings from more efficient model training on to customers, encouraging them to generate more inane images and settle useless arguments for less. Efficiency improvements have manifested as reduced prices, and reduced prices have heightened demand. Heightened demand means we are seeing more energy usage than with the less efficient technology. In fact, NVIDIA shipped 3.7 million graphics processing units (GPUs) in 2024, more than a million additional units compared to 2023, despite the efficiency gains in that period.
While Jevons Paradox isn’t unique to AI, the compounding effect of unexpectedly resource-intensive inference makes this a different beast entirely. In 2025, the main talking point around energy and AI was that training posed the greatest environmental cost. Training involves exposing models to massive amounts of text and performing trillions of mathematical operations to iteratively adjust the parameters encoded in their neural architectures. Basically, it’s how models are built and taught. Once trained, energy consumption during inference (actual use) was expected to plummet. Instead, in some parts of the world, powering AI now accounts for more than 25 percent of grid electricity demand.
In reality, the energy cost has simply shifted to inference. Inference is what happens when a user actually queries a model: the traversal of the neural network the model must perform internally to generate the desired response. An estimated 80 to 90 percent of energy now goes to inference, in direct contradiction of hopes that, once models were trained, energy demand would decrease. And, to make matters worse, inference itself has grown more energy-intensive.
The Gory Details on AI Energy Demand
In Virginia, USA, data centres now consume 26 percent of electricity
That’s six times the American average of 4.4 percent
And well above the global average of 1.5 percent
The International Energy Agency projects data centre electricity consumption will grow 15 percent annually from 2024 to 2030, “more than four times faster than the growth of total electricity consumption from all other sectors”
AI-related electricity consumption is expected to grow by as much as 50 percent annually from 2023 to 2030
A 2025 S&P Global estimate projected that data centres across the USA would need “22% more grid power by the end of 2025 than they did one year earlier and will need nearly three times as much in 2030”
DeepSeek, the model I used as an entry point for discussion last year, turned out to have a more complicated relationship with energy than originally thought. When the model was first announced, its developers demonstrated a training process with dramatically lower resource intensity than that of competitors, leading some—including me—to hope that their method might provide a blueprint for greener AI. However, not only was their lower-impact training predicated on the energy-intensive training of their predecessors, but the shallower architecture they piloted also required more effort during inference. Theirs was a reasoning model and, since its release, reasoning models have been embraced by many large AI players. Unfortunately, inference for these reasoning algorithms is more energy-intensive. To make matters worse, we now face a double whammy: reasoning models layered on top of training processes that were already highly resource-intensive, increasing energy use at both the training and inference stages.
Before the ‘reasoning revolution’, Google measured that, across 2019–2021, approximately 60 percent of its machine learning energy went to inference and 40 percent to training. Reasoning models now produce one to two orders of magnitude (10 to 100 times) more output tokens per request than standard chat models. Compounding this is a double penalty: because of their long outputs, servers can’t run as large a batch size, preventing them from spreading energy costs across as many concurrent requests. This means higher energy per token as well. OpenAI’s o1 costs six times more per token than GPT-4o. Google has published no updated figures, but we can expect inference now exceeds that 60 percent share. Taken together with Google’s decision to stop disclosing the proportion of its overall emissions attributable to AI (previously reported at 15 percent), a picture begins to emerge of inference-driven emissions growth that companies may be reluctant to quantify.
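To see how those two penalties compound, here is a toy calculation. The token counts and multipliers are hypothetical round figures within the ranges described above, not measurements:

```python
# Toy calculation of the reasoning-model "double penalty".
# All numbers are illustrative assumptions, not measured data.

chat_tokens = 500              # output tokens for a typical chat response
energy_per_token = 1.0         # arbitrary energy unit

token_multiplier = 10          # reasoning models: 10-100x more output tokens
per_token_multiplier = 2.0     # smaller batches -> more energy per token

chat_energy = chat_tokens * energy_per_token
reasoning_energy = (chat_tokens * token_multiplier) * (energy_per_token * per_token_multiplier)

# The two effects multiply, even at the low end of the token range.
print(f"{reasoning_energy / chat_energy:.0f}x the energy per request")
# 20x the energy per request
```

At the top of the token range (100x), the same arithmetic gives a 200-fold increase per request, which is why per-request efficiency gains elsewhere can be swamped entirely.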
The reluctance of AI companies to quantify emissions is not imagined, and it goes beyond just Google. We have some proxy measurements of the scale of the problem (see The Gory Details on AI Energy Demand), but the AI powers-that-be remain cagey about exactly quantifying their resource usage. So, while we know that energy use from LLMs is increasing, and we know newer versions of LLMs are making things worse, it is difficult to hold anyone accountable, or, as consumers, to take responsibility for choosing providers that prioritise sustainability. We are still at the beginning of the AI energy demand curve, but we don’t even know the shape of the curve we’re on.
AI Companies Aren’t Letting People Account for Their Carbon
The opacity from tech companies around their emissions warrants further examination. The trajectory tells the story. From 2010 to 2018, only 17 percent of notable models shared even indirect data from which environmental impact could be estimated, with no direct disclosures at all. Direct disclosure peaked in 2022 at 10 percent, but by early 2025, most notable AI models were back in the ‘no disclosure’ category.
Key details like which data centre processes your request, how much energy it takes, and how carbon-intensive the energy sources are remain known only to the companies running the models. And they face no incentives beyond public pressure to release that information.
Without proper disclosure, we’re left with methodological gymnastics: researchers scraping sustainability reports, multiplying by average electricity rates, producing 2030 projections that swing wildly from 700 to 1,400 terawatt-hours (TWh). This uncertainty isn’t a modelling problem; it’s a direct consequence of corporate non-disclosure.
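That gymnastics routine is, at its core, one multiplication: projected electricity use times an assumed grid carbon intensity. A minimal sketch with invented inputs shows how sensitive the answer is to both assumptions:

```python
# A minimal sketch of the proxy estimation researchers fall back on.
# Every input below is an illustrative assumption, not a measured figure.

def emissions_mt(energy_twh: float, intensity_g_per_kwh: float) -> float:
    """Convert TWh of electricity into megatonnes of CO2e.

    1 TWh = 1e9 kWh; dividing grams by 1e12 yields megatonnes.
    """
    return energy_twh * 1e9 * intensity_g_per_kwh / 1e12

# The 2030 projections in the text span 700 to 1,400 TWh; pair each end
# with a plausible-but-assumed grid intensity to see the spread.
low = emissions_mt(700, 300)    # cleaner grid, lower demand
high = emissions_mt(1400, 500)  # dirtier grid, higher demand
print(f"{low:.0f} to {high:.0f} Mt CO2e")
# 210 to 700 Mt CO2e
```

A greater-than-threefold spread in the bottom line, driven entirely by which undisclosed numbers you guess, is exactly the uncertainty the paragraph above describes.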
When companies have disclosed figures, it has often been without transparency. Sam Altman has said a ChatGPT prompt consumes roughly 0.34 Wh, but provided no measurement boundary or methodology, making that number impossible to verify or compare. Even when methods are released, they are often outdated or narrow.
Individual and organisational choice about whether to use generative AI has dissipated. Last year, I wrote about how The Chancery Lane Project regularly chose to use smaller models and avoid generative capabilities. In just one year, those capabilities have become so systemically ingrained in how we work, search, and communicate that, for a small non-profit in the increasingly AI-ified legal field, generation has shifted from a choice to a fundamental component of work. In 2025, legal commentators argued that lawyers have a duty to understand AI competently, warned that law firms risk competitive disadvantage without adopting it, and observed that the conversation has moved from if and why to how.
Last year, I wrote: “the [AI] train has left the station. The sustainability community needs to decide whether we’re going to run to catch it.” This year, it feels more like the train is bearing down on us. And we’re still arguing about whether to move.
Further reading
- R. Poudineh and D. Apostolopoulou, “Artificial Intelligence and its Implications for Electricity Systems,” Oxford Institute for Energy Studies, no. 145, May 2025, Accessed: Feb. 09, 2026. [Online]. Available: https://www.oxfordenergy.org/publications/artificial-intelligence-and-its-implications-for-electricity-systems-issue-145/
- J. O’Donnell and C. Crownhart, “We did the math on AI’s energy footprint. Here’s the story you haven’t heard. | MIT Technology Review,” MIT Technology Review. Accessed: Feb. 09, 2026. [Online]. Available: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
- C. Bogmans, G. Ganpurev, P. Gomez-Gonzalez, G. Melina, A. Pescatori, and S. Thube, “Power Hungry: How AI Will Drive Energy Demand,” 2025, doi: 10.2139/ssrn.5370933.
- A. S. Luccioni, E. Strubell, and K. Crawford, “From Efficiency Gains to Rebound Effects: The Problem of Jevons’ Paradox in AI’s Polarized Environmental Debate,” FAccT 2025: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, vol. 1, pp. 76–88, Jun. 2025, doi: 10.1145/3715275.3732007.
- “The ML.ENERGY Leaderboard.” Accessed: Feb. 09, 2026. [Online]. Available: https://ml.energy/leaderboard/
- S. Khan et al., “Green AI techniques for reducing energy consumption in AI systems,” Array, vol. 29, no. 7, p. 100652, Mar. 2026, doi: 10.1016/j.array.2025.100652.
- International Energy Agency, “World Energy Outlook Special Report: Energy and AI”. Accessed: Feb. 09, 2026. [Online]. Available: www.iea.org/terms
- “AIEnergyScore (AI Energy Score).” Accessed: Feb. 12, 2026. [Online]. Available: https://huggingface.co/AIEnergyScore
- “Energy-Aware Hosted Inference | Neuralwatt Portal.” Accessed: Feb. 12, 2026. [Online]. Available: https://portal.neuralwatt.com/
- “GreenPT – The green AI & privacy-friendly GPT Chat.” Accessed: Feb. 12, 2026. [Online]. Available: https://greenpt.ai/
- T. da S. Barros, F. Giroire, R. Aparicio-Pardo, and J. Moulierac, “Small is Sufficient: Reducing the World AI Energy Consumption Through Model Selection,” Oct. 2025, Accessed: Feb. 09, 2026. [Online]. Available: http://arxiv.org/abs/2510.01889
- “Artificial Intelligence’s Energy Paradox: Balancing Challenges and Opportunities,” Jan. 2025. Accessed: Feb. 09, 2026. [Online]. Available: https://reports.weforum.org/docs/WEF_Artificial_Intelligences_Energy_Paradox_2025.pdf
- E. Gibney, “Secrets of DeepSeek AI model revealed in landmark paper,” Nature, Sep. 2025, doi: 10.1038/d41586-025-03015-6.
- G. Kamiya and V. C. Coroamă, “Data Centre Energy Use: Critical Review of Models and Results,” Mar. 2025. Accessed: Feb. 09, 2026. [Online]. Available: www.iea-4e.org/edna
- G. Herring and S. Dlin, “Data center grid-power demand to rise 22% in 2025, nearly triple by 2030.” Accessed: Feb. 12, 2026. [Online]. Available: https://www.spglobal.com/energy/en/news-research/latest-news/electric-power/101425-data-center-grid-power-demand-to-rise-22-in-2025-nearly-triple-by-2030
- M. Jhaveri and V. Palat, “Measuring AI’s Energy/Environmental Footprint to Access Impacts,” Federation of American Scientists. Accessed: Feb. 09, 2026. [Online]. Available: https://fas.org/publication/measuring-and-standardizing-ais-energy-footprint/
- K. Ramachandran, D. Stewart, K. Hardin, G. Crossan, and A. Bucaille, “As generative AI asks for more power, data centers seek more reliable, cleaner energy solutions,” Deloitte. Accessed: Feb. 12, 2026. [Online]. Available: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/genai-power-consumption-creates-need-for-more-sustainable-data-centers.html
- S. Luccioni, B. Gamazaychikov, T. A. da Costa, and E. Strubell, “Misinformation by Omission: The Need for More Environmental Transparency in AI,” Jun. 2025, Accessed: Feb. 09, 2026. [Online]. Available: http://arxiv.org/abs/2506.15572
- S. Luccioni, Y. Jernite, and E. Strubell, “Power Hungry Processing: Watts Driving the Cost of AI Deployment?,” 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024, pp. 85–99, Jun. 2024, doi: 10.1145/3630106.3658542.
