Welcome to See Through News

Speeding Up Carbon Drawdown by Helping the Inactive Become Active

Is AI Too Toxic For Climate Activists?

AI, initially cast as Good Robot saviour, is increasingly recast as Bad Robot threat. Does its hunger for energy and water mean effective, responsible, pragmatic green activists should reject this new technology outright?

This article, part of a series on See Through’s policy of using humanity’s God-like technology to prolong our existence, rather than truncate it, asks if and when it’s OK for environmental activists to use AI.

The power and the water

Search the See Through News website for articles on AI, and you’ll find that ever since its 2021 launch, See Through has been waving a red flag about the technology’s emissions impact:

And another thing…

The AI revolution’s environmental challenge isn’t limited to competition for limited resources.

Other See Through News articles have examined its democratic, cultural, legal, and financial impacts. 

Whatever the focus, See Through has questioned whether this technology, whatever its advertised potential, is actually a net help or hindrance in tackling our real-world carbon crisis. The answer has rarely been positive.

With questions about machine learning’s voracious energy and water appetite getting louder as our Silicon Valley Overlords jostle for chatbot dominance, what’s an effective activist to do?

AI is everywhere

Like plastic in the 1960s, AI is being inserted deeper and deeper into our daily lives, whether we like it or not. 

Within a few years, it has gone from being an obscure denizen of university computer basements to an omnipresent and persistent sommelier. 

Whatever you do online, this ingratiating and relentless sommelier appears at your elbow.

Occasionally, when you need a glass of something, they’re welcome. More often than not, they’re annoying.

Look up a cast list, book a table, find a video someone mentioned, order some dog food, check a recipe… some chatbot icon appears, uninvited. 

All you want is the name of that actor who was in that thing, to make sure your anniversary goes smoothly, ensure Fido gets fed, check how to make a bechamel sauce… and that bloody waiter keeps butting in to enquire whether you want a glass of wine.

It’s as if the internet people have invested a huge amount of money in vast warehouses of wine, and are struggling to shift it.

Big tech has the whiff of a solution in search of a problem, or someone desperately trying to appear useful, but we turn out not to have much choice in the matter.

What can effective environmental activists learn about if, when and how to take up their offer?

The first thing is to work out whether it’s ‘worth’ it, i.e. whether the benefits outweigh the costs.

It’s just maths

In principle, knowing whether to use AI for any given task is mathematically trivial.

It’s a simple cost/benefit analysis.

For the carbon Cost, employ the standard formula:

Carbon Emissions = Compute Hours × Power Draw × Power Usage Effectiveness × Grid Carbon Intensity

Plug in four numbers, and multiply them.

For the carbon Benefit, do a similar calculation of the net carbon drawdown resulting from using AI in your particular project or application. Both are expressed in the same unit: tonnes of carbon dioxide equivalent (tCO2e) emitted, reduced or sequestered.

Subtract the Cost from the Benefit. If the resulting mass of emissions is positive, consult the wine list and order whatever suits.

If the number is negative, stick to tap water.
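The sums above can be sketched in a few lines of code. Every input value below is a hypothetical placeholder, not a measured figure; the function simply multiplies the four numbers from the standard formula (here in kilograms rather than tonnes) and compares the result with a projected drawdown benefit.

```python
# A minimal sketch of the cost/benefit calculation described above.
# All numbers are hypothetical placeholders, for illustration only.

def carbon_cost_kg(compute_hours, power_draw_kw, pue, grid_intensity_kg_per_kwh):
    """Carbon Emissions = Compute Hours × Power Draw × PUE × Grid Carbon Intensity.

    kW × hours = kWh; × PUE accounts for data centre overhead;
    × kgCO2e/kWh converts energy into emissions.
    """
    return compute_hours * power_draw_kw * pue * grid_intensity_kg_per_kwh

# Hypothetical project: 500 GPU-hours at 0.7 kW per GPU, in a data
# centre with PUE 1.2, on a grid emitting 0.4 kgCO2e per kWh.
cost = carbon_cost_kg(500, 0.7, 1.2, 0.4)  # kgCO2e emitted

# Hypothetical projected drawdown from the AI-assisted project.
benefit = 250.0  # kgCO2e

net = benefit - cost
print(f"cost={cost:.0f} kg, benefit={benefit:.0f} kg, net={net:.0f} kgCO2e")
# → cost=168 kg, benefit=250 kg, net=82 kgCO2e
```

With these made-up numbers the result is positive, so order the wine; in practice, as the next section explains, the inputs are far harder to pin down than the arithmetic.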

‘Is it OK for me to use LLMs?’

In practice, such calculations are fiendishly complex.

Predicting the total lifetime carbon drawdown benefit of any of See Through Together’s dozens of projects is hard enough. Any projection of an untested original concept involves a lot of guesswork.

Predicting the carbon cost should require much less guesswork. Cost calculations should be based on existing, proven technology and performance. Hard numbers. Established conversion factors. Standard figures.

An environmentally-conscious, responsible adult asking ‘Is it OK for me to use ChatGPT?’ deserves a simple answer. 

But try a (relatively low-carbon) Google search – or a (relatively high-carbon) LLM enquiry – and you’ll be left even more bamboozled and baffled.

AI summaries – if you trust them – might put the emissions cost at ‘up to a few grammes per enquiry’.

Is that a lot, or not?

How far to the nearest pub?

Skip to the bottom of the AI summary that annoying chatbot sommelier thrusts under your nose, and things get even murkier. 

The chatbot might mention in passing that depending on this or that, the carbon cost of an LLM enquiry could actually be anywhere from hundredths of a gramme to several grammes.

An answer that spans three orders of magnitude should not inspire confidence.

Imagine asking a stranger how far it is to the nearest pub, and being told ‘something between 300 metres and 30km’. Is this actionable information? What would you do with this answer?

If you’re still determined to accurately measure the emissions you generate every time you use an LLM, you might scroll down to the articles recommended by familiar ‘standard’ web search tools.

But bear in mind the articles at the top of the list are probably the ones who’ve paid the most to be there. The ones below could be algorithmically calculated to confirm your existing biases.

Still, the most diligent environmental activist might persist, and seek out deep-dive articles by impressively-qualified bloggers. 

They’ll take you through the sums, explain what ‘Power Usage Effectiveness’ actually means, show their workings, and share their conclusions. 

But even if you’ve made it this far, what should you conclude when you find articles with headlines like:

Training AI models doesn’t emit that much if we just make reasonable comparisons instead of crazy ones

vs

The Carbon Footprint of LLMs — A Disaster in Waiting?

We should all be able to make our own judgments on the subjective words (‘that much’, ‘reasonable’, ‘crazy’, ‘disaster’, etc.). 

We all have the right to prioritise what we consider our species’ ‘needs and wants’ should be. We can use our own scales to balance the benefits of making pictures of puppies in funny hats against the environmental cost of generating them.

In truth, however, few non-specialists will read beyond the headline.

But even those who can follow the maths must also consider what lies (which might be the operative word) behind the numbers.

The first rule of computing, after all, is ‘garbage in, garbage out’.

It’s just transparency

We can’t trust the numbers the chatbot-pedlars provide.

The ‘hyperscalers’ seeking to convince us we ‘need’ their data factories know all these values. They just don’t want us to know the truth, because they:

  • are primarily motivated by profit
  • are insufficiently regulated
  • know the real numbers are even more shocking than we fear

We know this because whenever pesky regulators force hyperscalers to cough up the real numbers, or disclose actual information, they’re always way worse than advertised.

One day such disclosures may reveal that a profit-driven hyperscaler has overstated its data centre emissions in its public claims.

Until then, it’s safe to assume they’ll fudge, fib and obfuscate to make a buck. 

At the very least, we should urge our lawmakers to make them tell the truth, and punish them when they don’t.

And don’t ask any of them for directions to the nearest pub.

Whither AI?

Nothing apart from civilisational collapse can return the genie to its bottle.

For better or worse, we’re stuck with it. Big Tech’s lack of transparency makes its trajectory even more uncertain.

Our latest Masters of the Universe have a different story for everyone. They:

  • Proselytise to the public their passion to make our lives easier, promising a future of no work and all play. 
  • Assure investors their products are as revolutionary as fire, the wheel or the internal combustion engine.
  • Placate regulators with non-binding assurances the tech is safe in their hands.
  • Convince their boards they’ll avoid the fate of previous asset bubbles. 
  • Demand their lawyers protect them from having their pants pulled down in public, so we can all see the size of their emissions.

While international finance plays out its familiar cycles of fear and greed, the tech moves on. Each week introduces a new plot twist:

  • a Chinese breakthrough in LLM training
  • a more powerful processor
  • another government hoping that hosting data centres will somehow make them ‘players’ in this Brave New World
  • another round of funding adding billions to a corporate war chest

Some things we can count on, however. Until legislators start dictating to tech billionaires, rather than the reverse, the tech bros will remain:

  • indifferent to environmental concerns
  • unconstrained by morality
  • resistant to transparency 

Other forces will determine this technology’s future. 

Predominantly, but not exclusively, money.

Bubbles

AI is driven by capitalism, red in tooth and claw.

As of mid-2026, it’s still somehow defying the laws of financial gravity, but this is not a new story. It’s the same as all previous asset bubbles.

  • Businesses raise billions from investors before the bubble bursts.
  • Big investors know the bubble is due to burst, but reckon they can still make a lot of money first.
  • Small investors convince themselves they’re backing the right horse.
  • Everyone thinks they’ll somehow get out before the bubble bursts. 

As with speculators in all previous bubbles, they can’t all be right. 

Still, they fancy their chances in a financial system where reckless speculation is unlikely to leave them impecunious or imprisoned. 

Revolutions

Meanwhile, the AI revolution continues to unfold. 

The billions now being raised for new data centres are unlikely all to end up being spent on power-and-water-guzzling data factories.

Not because there isn’t enough money in the world to pay for the projected capacity, but because the survivors of the imminent LLM massacre may no longer need that amount of brute computing muscle.

Revolutionary new energy-efficient chips may play their part in reducing the direst projections. Old-fashioned commercial reality and power politics may be more influential.

Chances are that most of the multitude of LLM rivals, each aspiring to become world-beaters, will soon run out of cash, be gobbled up, consolidated or otherwise whittled down to a handful. 

This culling is as likely to be driven by geopolitics and national security as by technical performance.

Fewer players will likely mean less duplication of expensive, carbon-belching model training: the remaining LLM competitors won’t each need to re-train their proprietary models on the entire Internet, over and over again.

As computing moves from LLM training to inference we’ll hear less about power-hungry data centres, and more about ‘edge computing’ taking place on our gadgets.

The ‘good’ news is that inference on your phone requires much less computing grunt than training LLMs in data centres. No more massive, tedious, repetitive, carbon-hungry stochastic gradient descents to teach robots what newborn infants can do instinctively.

It’s unclear, however, whether distributing computing to smartphones and laptops will actually reduce emissions. It may:

  • Simply transfer the cost, and energy consumption, to the customer.
  • Obey Jevons’ Paradox: by being more efficient, supercharge demand, resulting in higher overall consumption.
  • Make the price of chatbot enquiries the boiling of billions of teaspoons of water every second, instead of a few lakes a day.

AI’s ethical, moral and existential questions won’t go away. They’re now fixtures in humanity’s ever-expanding suicide armoury.

But even if the worst projections of its direct ecological impacts recede, we’ve added a new source of energy demand at a time when we need to reduce our consumption.

How bad will it be? Timing is everything, but Why Waste Time In A Time Of Waste?

AI: good/bad robot?

So what can a pragmatic green activist conclude from this morass of ifs, buts and maybes?

For a start, outright rejection, however ideologically pure, is foolish.

Being ahead of the curve in concern about LLMs’ carbon impact has not meant the See Through network rejects all use of them.

Resisting the prevailing Good Robot/Bad Robot (Skynet vs. Wall-E) casting, See Through picks a more nuanced path.

AI’s unregulated development and profligate misuse means we’re burning yet more fossil fuel to generate: 

  • misinformation
  • deepfake videos
  • celebrity memes
  • revenge porn
  • child abuse imagery
  • fake news
  • bioweapons
  • cyberhacking
  • other products of our unfettered collective imagination.

We’re very much against that kind of thing in general, and against wasting valuable AI resources on it in particular.

AI’s positive applications, from detecting early cancer from X-rays to enabling cross-cultural communication, are well advertised by hyperscalers keen to accentuate the positive. AI will doubtless continue to add to the Good Robot side of the ledger, with everything from cancer cures to hot fusion.

Good Robot applications include solutions addressing emissions reduction – and other pressing existential threats.

Hammer, anyone?

Like any technology, from the wheel to nuclear fission or genetic editing, AI is neither intrinsically good nor evil. 

It’s a tool, ‘a hammer that can be used to tap in the final peg of a no-nail, zero-carbon eco-home, or to smash in a stranger’s skull’. 

AI companies, desperate to justify their investment and prove their worth, are indiscriminately handing out hammers like candy. Think of their annoying sommeliers as annoying hammer vendors, offering hammers to help you fry eggs or get a baby to sleep, if that helps.

For the moment, our governments appear powerless to limit our use of this new tech to such positive applications. Our Silicon Valley Overlords are only too happy to encourage us in our mistrust of being told when and how we can eat candy, or use hammers. It makes them lots of money.

Until this changes, we seem obliged to live with Good and Bad robots and cross our fingers.

And to do our best to know when it’s worth using an AI hammer when an emissions-reducing nail needs hammering.

How can AI help reduce carbon?

What might these carbon-busting nails look like?

Many See Through projects involve assembling large-scale open-source datasets designed, in some way or other, to speed up carbon drawdown. Some examples of projects that would benefit, or have benefitted, from AI:

  • The Magic See-Through Mirror: has been radically changed by LLM developments since it was conceived in March 2022, each significantly lowering the bar to execution.
  • See Through Carbon: carbon reporting ecosystem that could use LLMs to generate free bespoke carbon-reducing advice for small businesses who can’t afford human experts.
  • See Through Carbon Competition: distributed a donated $500K of data centre compute as its prize fund, crunching vast datasets to promote Global South carbon drawdown. 
  • How To Live Without Plastic: uses robots to transcribe interviews with old people around the world for a public database, which anyone can mine for a sustainable future.
  • See Through Together video content: created by humans using their imaginations, creativity and human resources, but edited using video editing software that includes growing numbers of automated features. See Through uses robots to save time by balancing audio, grading colour, isolating voices etc., but never to generate images, voices, music, graphics, text or stories.

In short, if we judge tech to be a useful tool to further See Through’s Goal of speeding up carbon drawdown, we use it.

See Through’s AI rules of thumb

So, after all those caveats and complications, how can responsible, pragmatic environmental activists know when it’s OK to deploy the robots, and when not?

  1. Data: do the cost/benefit calculations, seeking the best information and conversion factors available. If it comes out negative, find a positive project instead.
  2. Storytelling: don’t use a hammer to write a poem, compose music, create a graphic or tell a story. All See Through storytelling content will continue to be made by humans. It should show.
  3. Information: approach with extreme caution. Beware of asking robots for directions to the nearest pub. They might not have understood the question, or be hallucinating when they answer.

***

Other articles in this series:

For more See Through ecosystem Solutions to our polycrisis, visit www.seethroughtogether.org