If Money Bursts AI’s Bubble, Can Climate Activists Profit?


When scientists despaired at AI’s carbon footprint, tech bros, investors and regulators all shrugged. Now business bosses are realising they’ve been sold a computer-generated lemon, and bankers are reckoning data centres could cost the earth. If it takes money to puncture AI’s hubris, what can climate activists learn?

This article speculates that Artificial Intelligence’s asset bubble will shortly burst, but declines to celebrate what’s likely to be no more than a wobble in the trajectory of man’s latest CO2 turbocharger. It asks what those who are serious about decarbonisation might learn from this multi-trillion-dollar gamble with our future.

Is the AI party over?

As of September 2025, the AI party still appears to be in full swing.

Rumours that the grown-ups are on their way to close it down, however, are starting to spread.

In July, MIT published The GenAI Divide: The State of AI In Business 2025. A few partygoers stopped dancing and retired to quiet corners to mutter above the dance music. 

Might it be time to get out quick, and go home while the going was good? 

The first line of the MIT report’s Executive Summary was particularly alarming for suppliers and customers betting their farms on Artificial Intelligence.

Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.

The Three-Headed Beasts of Business, Government and Media are united below the neck by power, and wallow in the Money Mire. Their perspective on the world is very different from that of the rest of us, who scuttle between their feet to avoid being crushed as we forage for the crumbs they’ve dropped.

This tech party is mainly populated by Three-Headed Beasts. This is why rumours from MIT, the FT et al, that the new gold mine might be unprofitable hit such a nerve.

They got away with it before, with carbon-credit rubber-stampers Verra. Verra is still in business, despite articles demonstrating that 90% of its products were fraudulent. In fact, the carbon trading market continues to boom, sailing past a trillion dollars. If rocketing emissions didn’t damage carbon trading’s reputation, why should 90% of the biggest carbon trader’s products not working be a problem?

The Three-Headed Beasts at the party can call on their Media heads to help out. If any renewable energy company’s products had a 95% success rate, they’d make sure the public’s attention was directed towards the 5% failure. It worked with ‘sceptical climate scientists’, after all.

Eager to return to the dance floor, they might also reassure themselves that the MIT report wasn’t the first to point out that the tech bro Emperor was at best wearing a tiny G-string, if not totally naked.

But might this report be the one that stopped the party? MIT was a high-status source they often quoted when it suited them. The report seemed pretty thorough: 

‘based on a multi-method research design that includes a systematic review of over 300 publicly disclosed AI initiatives, structured interviews with representatives from 52 organizations, and survey responses from 153 senior leaders collected across four major industry conferences’.

The MIT authors anonymized the sources, but the loudest partygoers could all recognise themselves and each other behind the masks.

Reports throwing cold water on Artificial Intelligence’s 24-hour party people meant more muttering breaks, but they managed to shrug this one off too. The party continued.

In August, the FT published a deep dive called ‘“Absolutely immense”: the companies on the hook for the $3tn AI building boom’.

It was behind the FT paywall, but all the partygoers paid for this kind of high-quality publication.

This article was a real party-pooper, because it was about money.

Three Trillion Dollars

The FT had spoken to people who’d looked at the projections and done the sums. Their consensus was that the price tag to make our Silicon Valley Overlords’ dreams real was $3Tn.

The tech bros prefer us to think about ethereal, cost-free ‘cloud’ software. It conceals the reality that AI actually happens in earthbound data centres. 

Data centres are massive boxes of computer chips that require as much power as a small town to run. The chips are so high-powered (or inefficient, depending on your perspective) that most of the energy they consume – not to mention vast quantities of water – goes simply to stopping them bursting into flames.

A data centre doing no computing still requires 70% of the energy it uses at full computing capacity, just to cool its GPUs (Graphics Processing Units – the chips that do the computations).
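That cooling overhead is easy to put into numbers. A one-line sketch, taking the 70% idle figure quoted above and a 2GW facility as inputs (both figures on trust from this article, not independently verified):

```python
# Idle power draw implied by the 70% cooling figure quoted above.
# The 2,000 MW input matches the 2GW data centre discussed in this
# article; both numbers are assumptions taken for illustration.
def idle_power_mw(full_load_mw: float, idle_fraction: float = 0.7) -> float:
    """Power a data centre draws while doing no useful computing."""
    return full_load_mw * idle_fraction

print(idle_power_mw(2000))  # → 1400.0 (MW burned just keeping idle chips cool)
```

In other words, a 2GW facility that computes nothing still consumes as much power as a sizeable city.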

How much do data centres cost? 

The article gave an example of a 2GW data centre being built in Abilene, Texas. Its backers raised $5Bn in equity and borrowed $10Bn from JP Morgan. That’s 15 billion dollars for one data centre.

Why did investors and bankers reckon a single data centre was worth $15Bn? This is, after all, the entire GDP of Congo, provider of the rare earth metals that make its microprocessors work. $15Bn is also the GDP of Brunei, supplier of the fossil fuels required to cool and power them.

The answer? A piece of paper. Specifically, Oracle’s 15-year-lease on the Texan data centre. That was all the security the money men wanted to start digging holes in Texas.

This puts into perspective MIT’s figure of $30–40Bn that business bosses have already spent on AI ‘solutions’. That piffling sum was merely the total annual GDP of countries like Iceland, Honduras, Zimbabwe or Estonia. Or, we now know, 2–3 data centres.

The FT article multiplied that price tag by the number of data centres on Silicon Valley’s shopping list, and totted up $3Tn.
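The arithmetic behind that total is simple enough to sketch. Assuming, purely for illustration, that the Abilene deal’s $15Bn is a representative unit cost (the FT’s own method was more involved), the $3Tn figure implies roughly 200 such facilities:

```python
# Implied scale of the FT's $3tn estimate, taking the Abilene deal
# described above as a representative unit cost (an assumption made
# for illustration, not the FT's own calculation).
equity = 5e9                 # $5Bn equity raised by the backers
debt = 10e9                  # $10Bn borrowed from JP Morgan
per_centre = equity + debt   # $15Bn for one 2GW data centre

total_buildout = 3e12        # the FT's $3Tn figure
print(int(total_buildout / per_centre))  # → 200 Abilene-sized data centres
```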

Even for Silicon Valley Overlords, three trillion dollars (US$3,000,000,000,000, to give it all its zeros) is a humungous amount of money.  

$3Tn is the GDP of the world’s 7th-biggest economy, France. If you prefer to trade in autocratic petro-states, it’s Russia ($2Tn) and Saudi Arabia ($1Tn) combined.

The bankers in the FT article reckoned the Big Three ‘hyperscalers’ (Amazon Web Services, Microsoft Azure and Google Cloud) could maybe raise half of that from their cashflow. The rest would have to be paid for by ‘everything else’: private equity, corporate debt, securitised credit etc.

Money’s a funny thing. When spent on some things, like paying nurses or educating children, a billion is a lot. It can only be ‘responsibly’ raised by cutting other existing budgets. 

Yet when certain other needs suddenly appear – aircraft carriers, ineffective PPE products, bank bailouts – tens of billions are instantly found without cutting anything else. The Magic Money Tree only fruits for certain hands.

Funny though money can be in rich men’s worlds, it’s not entirely economically illiterate to say that $3Tn spent on data centres is $3Tn not spent on other, more important things.

Carbon drawdown, for example.

So Artificial Intelligence, in its current form, is a climate triple-threat.

  1. It massively amplifies emissions exactly when we need to slash them
  2. Its applications are optimised to make money, not reduce carbon
  3. It sucks financial, energy and water resources, with only a vague promise of maybe coming up with some unspecified future solution

Keeping this party going requires a lot of wilful ignorance, cunning, and brute force. 

Suppressing three dirty secrets at once ain’t easy, and don’t come cheap.

AI’s Three Dirty Secrets

If all you’ve read about the subject for the past few years has been mainstream media’s fawning, breathless coverage of ‘the Artificial Intelligence Revolution’, its three dirty secrets may be news to you, but not to experts in three fields: 

  1. Computer scientists have long been aware of Secret 1
  2. Climate activists have long been aware of Secret 2
  3. Financiers and bankers have long been aware of Secret 3

The tech bros breezed past the experts behind Secrets 1 and 2.

The muttering at the tech party is only growing louder because Secret 3 involves the thing they actually care about: money.

We’ve just covered Secret 3. Before we look for climate activist lessons, here’s a re-cap on Secrets 1 and 2.

Secret 1: AI isn’t actually all that smart

In expert hands, Artificial Intelligence, and its sub-genres of Machine Learning, Natural Language Processing etc., is a useful tool for certain specific tasks. Outperforming experienced consultants in detecting early cancers from X-rays is the usual example.

The X-Ray Robot is a good example of the technology being put to good use, even if it is quoted suspiciously often.

As disappointed businesses, lawyers, students and politicians are discovering, however, in most cases the tech is way less useful than those selling it claim. It is often infuriatingly, cockily useless for any task that demands more than linguistic polishing.

Here are three recent examples of failure experienced by the See Through Network. If you’ve tried using a Large Language Model (LLM) like ChatGPT for something other than re-writing emails or translating contracts, you may find them familiar.

ChatGPT Challenge 1: Coding

Challenge: Two years ago, volunteers at See Through News (STN) laboriously hand-coded this simple interactive world map.

It used basic HTML coding language to place clickable pins in the correct locations, with links to hundreds of Facebook Groups it administered globally. 

With its total global Facebook group membership recently passing a million souls, STN asked ChatGPT to update the map with new data. 

ChatGPT breezily asserted it could do it. It instantly came up with an impressive sequence of tasks describing how it would go about it. The LLM was excellent at superficial styling, even suggesting improvements like filtering and colour coding for group type. 

ChatGPT, in a matter of minutes, generated a map which it claimed matched STN’s requirements. 

At first glance, the map looked impressive, but closer inspection showed it had failed on the more critical task of placing the pins in the correct location (it put Burkina Faso mid-Atlantic, East Grinstead near Solihull etc.).

Each time STN pointed these errors out, ChatGPT admitted its mistake and asked for more detailed input (precise coordinates, not place names etc), before delivering an ‘improved’ version. Which had exactly the same, marginally different, or new, failings. Over and over again.

Result: FAIL. STN concluded it would be quicker to do it all again by hand. Though tedious, it would guarantee a successful outcome.

NB: this was not a failure of ‘vibe coding’. LLMs claim to be promptable via ‘vibe coding’, i.e. amateurs with no coding expertise can ‘simply’ describe the output they want, using everyday language, and let the robot do the rest. In this case, three veteran coders from the STN network checked the ChatGPT prompt history. They concluded their ‘professional’ prompts would not have made any difference in this case.
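For what it’s worth, this class of failure is cheap to catch automatically. A minimal sketch of a coordinate sanity check, assuming a hand-maintained table of known-good locations (the reference coordinates and 2-degree tolerance below are illustrative, not STN’s actual data):

```python
# A minimal sanity check for generated map pins, the kind that would catch
# the Burkina-Faso-in-the-mid-Atlantic class of error described above.
# The reference coordinates and 2-degree tolerance are illustrative
# assumptions, not STN's actual data.
REFERENCE = {
    "Burkina Faso": (12.4, -1.5),   # approx. lat/lon (Ouagadougou)
    "East Grinstead": (51.1, 0.0),  # approx. lat/lon
}

def pin_errors(pins: dict, tolerance_deg: float = 2.0) -> list:
    """Return the names whose generated pin lands too far from the reference."""
    bad = []
    for name, (lat, lon) in pins.items():
        ref = REFERENCE.get(name)
        if ref and (abs(lat - ref[0]) > tolerance_deg or abs(lon - ref[1]) > tolerance_deg):
            bad.append(name)
    return bad

# A pin dropped mid-Atlantic fails; a roughly correct one passes:
print(pin_errors({"Burkina Faso": (20.0, -40.0)}))  # → ['Burkina Faso']
print(pin_errors({"Burkina Faso": (12.4, -1.5)}))   # → []
```

None of this needs an LLM – which is rather the point.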

ChatGPT Challenge 2: Animation

Challenge: A creative team at See Through Together (STT) had written a dozen short scripts for a ‘Fabulous Fables’ Playlist, explaining climate change issues in simple language an 8-year-old can understand. 

The team had recorded voiceovers of children and adults narrating them. Much more complicated, and much better, than using robot-generated voices. Expert sound designers had mixed them with original compositions supplied by professional musician volunteers (much better than generic robot-composed music) to create a series of 2-3 minute soundtracks.

Human artists were unavailable to supply the images for the Fables, so STT gave ChatGPT the mixed audio file, the script and a particular artistic style to mimic (sand artist Ilana Yahav), and asked it to generate animation to match the story told in the soundtrack.

ChatGPT described a storyboard in impressive detail, including how Yahav’s style would appear on screen, and its emotional impact. But when instructed to actually generate what it had confidently and meticulously described, ChatGPT didn’t.

Time and time again, it either timed out, said it had run out of capacity and to try again tomorrow, or diminished the task to smaller and smaller durations. 

Even when the clips were truncated to 10 seconds, it repeatedly failed to generate a single usable frame.

Result: FAIL. Literally a waste of time. Nothing usable whatsoever – not even a single static frame.

ChatGPT Challenge 3: Fact-checking

Challenge: See Through Carbon (STC) wanted to fact-check the source of the aphorism from Sunzi’s ‘Art of War’, ‘Always leave your enemy a Golden Bridge to retreat across’. 

This 2,500-year-old military dictum from Chinese general Sunzi has been quoted by management consultants innumerable times in English. STC was preparing a slide deck for Chinese partners, and wanted to quote the original classical Chinese phrase.

Result: PASS AND FAIL. This turned out to be an excellent example of LLM’s strengths and limitations. It illustrates the critical role a human expert plays in both prompting and evaluating LLM’s output. 

If you ask ChatGPT ‘Who said “Always leave your enemy a Golden Bridge to retreat across”?’, it replies:

‘The saying “Always leave your enemy a golden bridge to retreat across” is attributed to Sun Tzu, the ancient Chinese military strategist and author of The Art of War (c. 5th century BCE). The idea appears in different translations of The Art of War, usually in the chapter on maneuvering or strategy. The principle is that when an enemy is cornered with no way out, they will fight with maximum desperation. By leaving them an escape route (a “golden bridge”), you avoid provoking a fight to the death and can control the battle more advantageously.

You might reasonably think this is the definitive answer.

However, if your prompt is ‘What is the original Chinese reference for ‘Always leave your enemy a Golden Bridge to retreat across’?’ ChatGPT gives a different answer:

The saying “Always leave your enemy a golden bridge to retreat across” is usually attributed to Sun Tzu’s The Art of War, but the wording most people know in English is actually a paraphrase. The closest original Chinese is in The Art of War, Chapter 7 (軍爭, “Maneuvering”):圍師必闕 wéi shī bì què “When you surround an army, leave an outlet.”

A non-Chinese-speaker might then think that ‘圍師必闕’ is a Chinese translation of the ten English words in the quotation, and confidently copy-and-paste it into a slide deck as a direct translation of ‘Always leave your enemy a Golden Bridge to retreat across’.

Fortunately, STC’s Chair of Trustees happened to have a degree in Chinese. He pointed out that none of the four characters in 圍師必闕 means either ‘Golden’ or ‘Bridge’. The oft-cited English phrase is actually a very loose later interpretation of the original Chinese, not a translation as such. Native Chinese speakers unfamiliar with this English interpretation would be baffled by a slide claiming one was a translation of the other.

In other words, ChatGPT was fantastically quick at finding the ‘right’ answer, saving a laborious search of the original classical Chinese text. But only if you knew the ‘right’ question to ask, and how to check the ‘right’ answer.

One of the See Through Network’s AI experts explained it like this:

In the linguistic sphere, LLMs are pretty good at summarisation and translation.  They are also relatively good repositories of commonsense knowledge, albeit implicit. They’re not good at handling tasks where ambiguities need to be resolved to get the right output, or where reasoning is needed. So question answering performance is poor, for instance (though Google and others use it for this purpose).

So is Artificial Intelligence actually ‘intelligent’?

Defining computer ‘intelligence’ has always been slippery. Understanding what LLMs actually do really helps.

LLMs like ChatGPT, DeepSeek, Gemini, Grok etc. are highly sophisticated predictors of the most likely next word (or, technically, ‘token’) in a sentence. When it works, the results can be indistinguishable from human intelligence.
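The principle can be shown in a few lines. What follows is a toy word-level bigram predictor – a drastic simplification offered only to make the ‘most likely next word’ idea concrete:

```python
# A toy illustration of next-token prediction. Real LLMs use neural networks
# over sub-word tokens and enormous context windows; this bigram counter
# shows only the underlying principle: emit whichever word most often
# followed the current one in the training text.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(model: dict, word: str) -> str:
    """The statistically most plausible next word."""
    return model[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # → 'cat' ('cat' followed 'the' twice, 'mat' once)
```

Scale this mechanism up by many orders of magnitude and its output starts to read like fluent prose – but it is still statistical mimicry, not understanding.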

But these word-emitting machines are programmed to generate plausible-sounding outputs based on a statistical analysis of whatever they have been ‘trained’ on. Their trainer, in most cases, is The Internet.

If you grew up knowing the ‘old’ computer adage of ‘garbage in, garbage out’, the same principle applies to LLMs. One of the See Through Network’s tech advisors explains it like this:

If you are trying to get out something more than what was put in, like truth, emotion, meaning.  LLMs work purely on form. If you are happy with an output that corresponds to plausible linguistic form, all is good!  The problem isn’t the LLM, it is the applications to which they are put. 

This is why understanding how Artificial Intelligence works is so critical to any debate about whether it is good/bad, or works/doesn’t work.

To be clear, LLMs are very good at certain things. Here are some examples from the same computing advisor:

LLMs are good at what lots of people use them for, e.g., “summarise this paper”, or “give me a recipe that uses walnuts and cheese”, or “translate this sentence into Swahili”. They are good at some (restricted) applications and really terrible at a vast swathe of applications that google etc are hoping to exploit with AI (such as question answering and search).  

A hammer ‘works’ if you want to drive in, or extract, a nail. That is what it was designed for. A hammer doesn’t ‘work’ if you want to brush your teeth, or groom a puppy. (Let’s set aside for the moment the matter of cost, whether measured in dollars or carbon emissions, and whether using the technology is good ‘value’).

AI researchers, despairing at the overselling of LLMs, often quote Professor Emily Bender’s description of LLMs as ‘stochastic parrots’. AI, Climate Change and Monkeys Climbing Trees To Reach The Moon explains why this is such a killer put-down to the Silicon Valley hype-merchants.

See Through News published the Monkeys Climbing Trees article in November 2022, the month ChatGPT was released, in case you think this is the wisdom of hindsight.

How long can the tech bros keep the AI plates spinning?

So how can LLM monetisers keep the AI party going?

One way is to convince everyone that the technology is a fait accompli, already an essential part of all modern life. This is why little LLM helpers are appearing everywhere you use the Internet, even when they’re pretty useless for the task.

This is an old – and expensive – marketing strategy. To get a country hooked on your fizzy drink, fast food, or mopeds, you plaster billboards everywhere, and flood the streets with Coke, McDonald’s or Hondas.

As for the computer scientists banging on the windows, calling them ‘stochastic parrots’ and pointing and laughing at them on the street, the tech bros have managed to ignore them, so far. They tell the DJ to turn up the music, and play remixes of old bangers.

These remixes are presented as original recordings. When investors get bored of your first hit, and customers no longer like it, you need to keep your offering fresh. Re-package old material and label it ‘Agentic AI’. When that stops working, call it ‘Retrieval-Augmented Generation (RAG)’. When that gets old, start raving about Proportional-Integral-Derivative (PID).

Like all jargon, these buzzwords are designed to distract the uninitiated from fundamental problems. They are the computing equivalents of Derek Zoolander’s iconic male model ‘looks’: ‘Magnum’, ‘Le Tigre’ and ‘Blue Steel’ are all actually the same.

None of them can cover up the industry’s big problem. It’s self-inflicted, and it gets worse as time goes on. 

Remarkably, we’re not talking about the greenhouse effect this time. 

AI’s Big Problem

Artificial Intelligence’s fundamental problem is the deeply flawed nature of LLMs’ training data set.

The Internet was already flawed when ChatGPT came out, because flawed humans wrote it. Robots churn out sexist, racist, ignorant guff because they’ve learned from the worst.

But LLMs have created a whole new problem. 

Post-ChatGPT Internet is polluted by more and more robot-generated ‘AI slop’ created by LLMs. Robots can’t distinguish their own output from human sources. Every new LLM-written robo-article or deepfake image further corrupts the data training set.
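A deliberately crude toy simulation of that feedback loop, assuming a ‘model’ that is nothing but a word-frequency table repeatedly refitted to its own output – a caricature of what researchers call ‘model collapse’, not a real LLM:

```python
# A crude caricature of the feedback loop described above: a "model" that
# is just a word-frequency table, repeatedly retrained on samples of its
# own output. Rare words miss the cut each generation, so diversity
# collapses. (A toy illustration of "model collapse", not a real LLM.)
import random

random.seed(0)  # deterministic for illustration

def retrain(vocab_weights: dict, sample_size: int = 50) -> dict:
    """Sample from the current model, then refit word frequencies to that sample."""
    words = list(vocab_weights)
    weights = list(vocab_weights.values())
    sample = random.choices(words, weights=weights, k=sample_size)
    return {w: sample.count(w) for w in set(sample)}

model = {f"word{i}": 1 for i in range(100)}  # start: 100 equally likely words
for generation in range(10):
    model = retrain(model)

print(len(model))  # far fewer than the original 100 words survive
```

Each generation can only keep the words that happened to appear in its own sample, so the vocabulary ratchets downwards and never recovers.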

Hence the need for ‘this-time-it’s-different’ jargon to convince investors and customers the party is still on.

Dr. Mark Drummond, See Through Carbon AI & Strategy Advisor, whose CV stretches to programming NASA’s Mars Rover autonomous vehicle in the ‘80s and developing Siri in the ‘90s, speaks more plainly.

RAG is ‘just search applied to the results of an LLM’.

Or even more pithily,

‘A search turd rolled in LLM glitter’. 
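Stripped of glitter, the retrieval step that puts the ‘R’ in RAG can indeed be sketched in a few lines: a plain keyword search over a document store, whose top hit is pasted into the prompt an LLM is then asked to complete. The documents and prompt format below are illustrative assumptions, not any vendor’s actual pipeline:

```python
# A minimal sketch of the retrieval step in a RAG pipeline: pick the stored
# document sharing the most words with the query, then paste it into the
# prompt an LLM would be asked to complete. The documents and prompt format
# are illustrative assumptions, and the LLM call itself is left out.
DOCS = [
    "Data centres need constant cooling to stop their GPUs overheating.",
    "The Art of War is attributed to the Chinese general Sunzi.",
]

def retrieve(query: str, docs: list) -> str:
    """Return the document with the largest word overlap with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def rag_prompt(query: str) -> str:
    """Assemble the 'augmented' prompt a real system would send to an LLM."""
    return f"Context: {retrieve(query, DOCS)}\nQuestion: {query}\nAnswer:"

print(retrieve("who wrote the art of war", DOCS))  # → the Sunzi document
```

Production systems swap keyword overlap for vector embeddings, but the shape of the pipeline is the same: search first, then generate.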

Like children coming up with increasingly desperate excuses for not having done their homework, the Silicon Valley Overlords are running out of road.

Selling their products as universal panaceas means they have had to keep coming up with new jargon, in the hope it will stave off punishment from indulgent and credulous grown-ups.

The ones who are making them rich.

Secret 2: AI belches carbon

Data centres, or more accurately the chips they house, are immensely power-hungry and water-thirsty.  

As early as August 2022, months before ChatGPT launched, Semiconductor Engineering published another article that the tech-boosters managed to entirely ignore or dismiss.

Semiconductor Engineering is a professional publication for the chip-makers who design and manufacture the microprocessors that consume all that energy and water. They’d done their calculations, and were already sounding the alarm.

Machine learning is on track to consume all the energy being supplied, a model that is costly, inefficient, and unsustainable.

Mainstream media, old and new, were too dazzled by ChatGPT’s pyrotechnics and razzmatazz to listen to such party-pooping talk from the caterers.

See Through News quoted this panicky appeal for someone, anyone, to regulate the industry’s exponential growth in its May 2023 article Computing’s Carbon Footprint – the Other AI Threat.

Mainstream media has started mentioning Secret 2, but usually as a side-bar, or a by-the-way buried in paragraph 5.  

Many still buy the greenwash that the energy used for these huge new data centres is, or will be, ‘renewable’. Most ignore their vast thirst for increasingly inaccessible water.

It’s hard to know whether this is down to naivety, ignorance or corruption, but every article that doesn’t mention these inconvenient truths helps suppress them.

The vast majority of these data centres, existing and in the pipeline, are powered by burning fossil fuels.

No one apart from the hyperscalers knows the precise number, but this tells its own story. 

We can safely assume that if this $3Tn worth of new data centres were all powered by sun and wind, the hyperscalers would happily disclose such information.

Secret 3: AI is a bad investment

You all now know this secret.

Unlike Secrets 1 and 2, however, Secret 3 is the one the hyperscalers most fear getting out.

Companies built on money, designed to make more money, have created an insatiable need for even more money.

The fact that there may not be enough money in the world to fund AI’s projected expansion is now leaking out. 

  • Have those muttered conversations killed the party?
  • Have grown-ups showed up to turn on all the house lights?
  • Is the DJ packing their kit into their car?
  • Has everyone gone home to sober up, and reconsider their futures?

No. Not yet, but it’s only September 2025.

Is that it?

So far, in what billionaires like to call ‘the real world’, the MIT report, the FT article, and all the other drip-drip party-poopers have done nothing to retard the industry’s boom. 

  • Chip maker share values remain at astonishing multiples of their current revenues. 
  • Companies continue to ‘remodel’ their business to be ‘competitive’, even as they re-hire some of the human staff they fired as ‘prompt engineers’ to tell the LLMs exactly what to do, and as ‘hallucination testers’ to check their output for fibs and fantasies.
  • Governments persist in clamouring to get as many data centres built on their turf as possible, and worrying about keeping the lights on later.

Our Silicon Valley Overlords have managed to style out any awkward questions about overselling their tech. So far:

  • the tech giants have prioritised their shared interest in keeping the AI party going over their competition for market share.
  • their sales guys have successfully swatted away boffin accusations of being nothing more than ‘stochastic parrots’. 
  • invoking jargon has kept their corporate clients on the hook. They have plenty more up their sleeve for the next investment round or sales call when the last one fails. They’re the Emperors’ defence against small boys in the crowd shouting out ‘They’re just stochastic parrots’, ‘They’re only ‘search turds rolled in LLM glitter’, and other Secret 1 zingers.
  • ‘environmentalists’ asking awkward Secret 2 questions about restricting demand have been ignored. Anyone betting our future on infinite ‘growth’ on a planet with finite resources needs this cognitive dissonance to maintain their hero status in their own narratives.

Anyone questioning whether we should give everyone with internet access the freedom to generate infinite numbers of pictures of puppies in funny hats, write exam papers, mark exam papers, write legal arguments, compose pop music, write political speeches etc. won’t even get past the bouncers.

Questioning Demand would really kill the party. Tell the DJ to keep playing the Supply hits.

Eric Schmidt, ex-Google boss and superstar Artificial Intelligence DJ, is playing his part. Schmidt is digging the money groove far too much to be worried about the house burning down around us.

Questioned about the industry’s carbon footprint in late 2024, Schmidt literally invoked gambling.

I don’t think we’re going to hit the climate goals anyway because we’re not organized to do it. Yes, the needs in this area will be a problem, but I’d rather bet on AI solving the problem than constraining it and having the problem.

So far, so good, as the guy said to the people on the second floor as he plummeted from the 100th floor.

That’s the problem with asset bubbles. Everyone has a great time until the music stops.

Does this mean the robots will no longer save/destroy the world?

For anyone concerned about avoiding self-inflicted civilizational collapse, this is the wrong question. Yet this is the framing that has, so far, dominated public ‘Artificial Intelligence debate’. 

Climate activists using storytelling to speed up carbon drawdown must figure out smart ways to change the subject.

First, understand the problem. Since ChatGPT triggered the LLM boom, public discourse has almost entirely focused on the binary question of whether the robots will save us, or destroy us. 

This framing suits all tech bros. We know this because they’re always using it, and it does nothing to stop them making money.

  • Schmidt pitches the tech as a benevolent-R2D2 droid. He’s OK with gambling humanity’s future if it might increase his $27Bn net worth. 
  • Sam Bankman-Fried, before being rumbled as an $11Bn con-man, aided by his Effective Altruism stooges, agonized over the dystopian Skynet Terminator ‘bad robot’ version, which may declare war on humans. 
  • Elon Musk lives out his Iron Man fantasies unrestrained, whether trolling the President in the Oval Office, or launching rocket ships as he dreams of moving to Mars with his $415Bn.
  • Crypto-billionaires, peddling an even more imaginary money than the fiat currencies they seek to replace, proliferate, as do the emissions from the computations required to keep their crypto-plates spinning.

This ‘debate’ is conducted, and confected, between billionaires. The media, much of which they now own, uncritically reports and boosts their pronouncements. What they say flips between techno-jargon and referencing childhood science fiction heroes. They make hand-wavy claims their LLMs will figure out a fix to global heating, while wilfully ignoring their own disastrous climate impact. 

We take such nonsense seriously, because the people spouting it are seriously rich. If money talks, they hold the bullhorns to preach their ‘real-world’ gospel.

But beyond their super-yachts, ranches and skyscrapers, the rest of us live in a different reality. We lack billions to insulate us from human-induced climate change. 

Our ‘real world’ involves unprecedented storms, floods, fires and famines. The poorest get hit first and worst.

The right question

The more important question is ‘Should we keep pouring more kerosene on our raging house fire?’. 

This is the story climate activists should be telling. If Secret 3 works best, keep pulling the Money lever.

Smart climate activists in other fields have found ways of leveraging Money-talk to promote carbon drawdown, so emulate them. Shareholder activists, climate justice lawyers, and non-greenwash ‘impact investors’ all use Money-talk to good effect. 

It’s still important to use everyday language to explain it to the non-moneyed, but the shortest path to a sustainable future probably also requires climate activists to learn the dialects spoken by the moneyed.

Climate action lessons

We already know appeals to conscience, science, or thinking about their grandchildren don’t work on those holding the biggest levers of power. Ask Greta.

If the person you’re speaking to views the world through Money Goggles, trying to pull them off is futile. People can only choose to do that themselves.

How would you respond to some stranger shouting at you in a foreign language while poking you in the chest? If enough people shout and poke at the same time, this can work, but it’s a high risk strategy. The climate crisis is too urgent to put all our eggs in that basket.

Ten tips for an alternative approach. 

  1. Sidle up to the Money-Goggled.
  2. Stand shoulder to shoulder with them.
  3. Chat in a friendly tone about That Thing Over There you can both see.
  4. Find points of agreement. 
  5. Talk in language they understand: ‘investment risk’, ‘insurance premiums’, ‘return on investment’.
  6. Avoid dropping the C-Bomb, where C= ‘climate’, ‘carbon’ or any ‘green’ trigger words that will make them stop listening.
  7. Be well-informed, with all the right statistics, jargon and references at your fingertips. 
  8. Don’t shout. 
  9. Sometimes whisper, in case the others might hear and steal their lunch.
  10. Don’t take credit, encourage them to think it was all their idea. Once carbon reduction action makes them the heroes of their own narratives, they’ll do your proselytising for you.

Be encouraged that 80-89% of all the people on the planet want their governments to do more to address climate change. 

Where now for AI and the climate?

What are the climate activist lessons of this still-unfinished cautionary tale?

The volume of Artificial Intelligence-related ‘debate’ following ChatGPT’s November 2022 release is so vast, future historians will, ironically, have to rely on robots to trawl through it all. 

Smart researchers will need some expertise to get their robots to search for voices sounding the alarm, and ignore all the hype. For the past three years purveyors of Secrets 1, 2 & 3 have been drowned out by tech cheerleaders and festival barkers.

They’ve flooded the Internet with their hubris. Future historians will have to filter out the huge volume of boasts about:

  • raising billions from Venture Capital funds
  • tens of billions in LLM snake oil sold to greedy, gullible companies
  • the trillions in capital to build their version of Brave New World 

This is the version where they hold all the levers, set all the rules, and make all the money. The utterly unsustainable vision that future historians won’t be living in.

Even billionaires can’t permanently suppress inconvenient truths. 

Their customers, after trumpeting to their shareholders that their LLM investments would keep them competitive, are discovering those investments have failed 95% of the time.

After firing all those pesky, sweating, inefficient human staff, who quit, get sick, insist on sleeping and don’t work weekends, they’re sheepishly having to hire some of them back. It turns out that shiny, self-polishing LLMs don’t actually do a million times more work a squillion times cheaper. Businesses need humans to make them work properly, even when the goal is making money.

Technology never replaces humans; it is just another labour-saving tool. A tool that definitely has a role in carbon drawdown – if used responsibly and sparingly.

The chainsaw was bad news for axe-felling specialists, but forestry still needs human expertise to work out which trees to fell when, and what to do with them afterwards. 

Above all, humans need tree products, and robots don’t.

The challenge for climate activists is to amplify the voice of the 89% to quell the self-interested, money-centred bleatings of tech billionaires who increasingly hold the levers of power.

The end of the Artificial Intelligence party, whenever it happens, will be an opportunity, as people try to figure out what happened.

It won’t be easy. Vested interests will play the usual gaslighting cards (prioritising ‘growth’, plunging pension funds, magic bullet gambles etc.), so be prepared for that.

Different audiences need different ways of sending the same message.

The party was fun, for some, but let’s focus on putting out the fire now.

Can AI be part of the climate solution?

Claims that AI is ‘as revolutionary as electricity’ are dubious not just because it has yet to prove itself as versatile, but also because these new data-centre-housed robots have yet to address the problem of powering themselves.

To be as useful as electricity, the robot-vendors first have to figure out how to make them work without accelerating the collapse of human civilization.

Electricity became ubiquitous in an age when coal was wrongly seen as an infinite resource and the environmental costs of burning fossil fuels were not included in any cost-benefit calculation.

The current boom stage in Artificial Intelligence’s development is based, it seems, on the same assumptions, even though all the apparently clever people pushing it know both assumptions to be false. So much for human intelligence.

It appears the only grown-ups the tech bros we’ve allowed to run the party fear or respect are the Money Men. Time will tell whether borrowers and lenders can balance their greed to make money against their fear of losing it.

It’s possible to come up with the right decision for the wrong reasons, but the sooner we start valuing carbon as much as money, the shorter our path to a sustainable future will be.

This new tech is just a tool. We need all the tools we can gather to address the single most important issue facing our own and future generations – mitigating, stabilising and reversing the worst impacts of human-induced climate change.

We can use LLM hammers judiciously to be part of the solution, or we can continue to act as if there were no climate cost to this ‘cloud solution’. Either way, ‘carbon don’t care’. Atmospheric physics is indifferent to our rationalisations, and will keep on reacting to our actions in the same way it has since the Industrial Revolution.

So, how can we use Silicon Valley’s fancy new hammer for carpentry, and not puppy-grooming? Can it be used to douse the flames, instead of fanning them?

The See Through Network Case Study

The See Through Network is a global network of experts, spanning fields from computing to storytelling, who share the Goal of ‘Speeding Up Carbon Drawdown by Helping the Inactive Become Active‘.

  • The Network’s storytellers have come up with uniquely innovative ways to convince business owners it’s in their interests to report their carbon footprints accurately and transparently. They’ve even worked out a way to convince the most sceptical participants of all: farmers.
  • The Network’s social media team has developed a global network of community-based petri-dishes to directly access its target businesses, with a reach of more than one million people.
  • The Network’s carbon accountants have developed a series of seven Pilots, covering as wide a range of technical challenges as possible, to create a gold-standard, accurate, comprehensive carbon footprint calculator for any business sector, scope and scale.
  • The Network’s IT and data visualisation experts have designed a data schema, calculator, and user interface to maximise accuracy, transparency, utility and integrity.
  • The Network’s computer scientists, some of whom have been quoted in this article, are focusing on how AI can help, and not hinder, their goal.

The latter have specified three ways the project can use AI.

1 and 2 speed up otherwise sluggish or highly demanding human processes. 3 helps others reduce more carbon faster.

1 Remove IT bottlenecks

Automated software development, with the right LLMs being used by the right hands for the right purpose, is emerging as an area of authentic AI usefulness.

See Through Carbon, the Network’s accurate, free, open-source, transparent carbon footprint reporting ecosystem, needs to build a functional database for its seven Pilots.

The Network has the right human prompt-engineering experts to create a Product Requirement Description (PRD) good enough for the latest coding LLMs to generate prototypes, and a different set of the right human hallucination detectors to test those prototypes to destruction and identify revisions to the PRD that fix glitches.

Because the LLM can produce revised prototypes in a matter of seconds or minutes, these iterations happen much faster. The sooner the See Through Carbon Pilot database is developed, the sooner it can be populated with real-world data in an open-source database, and help regulators and businesses perform carbon reporting functions that are not based on wild AI guesswork.
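The generate-test-revise loop described above can be sketched in a few lines. This is purely illustrative control flow: the LLM call and the human destructive testing are simulated with trivial stand-in functions, and none of the names below are See Through Carbon’s actual code.

```python
# Toy sketch of PRD-driven iteration: an "LLM" builds whatever the PRD
# specifies, "human testers" report unmet requirements, and humans
# revise the PRD (not the code) until the prototype passes.

def generate_prototype(prd):
    # stand-in for a coding LLM: implements exactly what the PRD lists
    return {"implements": set(prd)}

def run_destructive_tests(prototype, requirements):
    # stand-in for human hallucination detectors: list every gap found
    return [r for r in requirements if r not in prototype["implements"]]

def revise_prd(prd, defects):
    # humans tighten the spec so the next generation closes the gaps
    return prd + defects

def develop(requirements, max_iterations=5):
    prd = requirements[:1]  # a first-draft PRD rarely covers everything
    for _ in range(max_iterations):
        prototype = generate_prototype(prd)       # seconds or minutes
        defects = run_destructive_tests(prototype, requirements)
        if not defects:
            return prototype, prd                 # ready for real data
        prd = revise_prd(prd, defects)
    raise RuntimeError("PRD still incomplete after max iterations")

prototype, final_prd = develop(
    ["store footprint", "tag by sector", "export audit trail"]
)
```

The point of the structure is that humans stay in the loop at the specification and testing stages, while the machine handles only the fast, cheap generation step in between.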

Good robot, good application.

2 Automate bespoke carbon reduction advice

Big businesses, if they care about reducing their carbon footprints, can afford expert human carbon consultants to do a deep dive into their operations and give them bespoke advice on where and how to reduce the most emissions fastest for their particular business.

For decades this was largely a performative branch of PR, a marketing opportunity to use greenwash, and elaborate ‘offsetting’ schemes, to make false claims of carbon neutrality which defied the laws of physics.

New regulations, like the EU’s Corporate Sustainability Reporting Directive (CSRD) and China’s new carbon reporting laws, are turning up the pressure on businesses to demonstrate and document measurable emissions reductions. These new rules use carrots, like limited-duration cap-and-trade carbon credits that carbon-reducing companies can trade at a profit with carbon laggards, and sticks, like financial fines for non-compliance.

Big businesses, however, account for only 30% of total global emissions. The vast majority are emitted by Small and Medium Enterprises (SMEs) that can’t afford to pay human specialists.

With the right data training set (see below), LLMs can be used to automatically generate targeted carbon-reduction advice for any business supplying its emissions data to the See Through Carbon database. The service is free, but comes at the ‘cost’ of a business making its carbon footprint as transparent as its annual financial statements.

See Through Network’s AI experts postulated a similar system (‘The Magic See-Through Mirror’) in 2022, before the technological breakthroughs pioneered by ChatGPT. What was then a massive technical challenge is now more attainable by the day.

3 Create an essential, unique, open-source data-training set

The new carbon reporting regulations close one greenwashing loophole. Businesses no longer get to make up their own carbon accounting rules, and are now mandated to include their ‘Scope 3’ non-energy indirect emissions.

‘Scope 3’ includes all the embedded energy in supply chains, and disposal after sales, i.e. all the off-premises emissions that would not have been emitted had that business not existed.

As Scope 3 accounts for 80-90% of many businesses’ actual, non-greenwashed, carbon footprints, this is a huge step towards treating carbon accounting with the same level of seriousness as financial accounting.

Most companies, fearing it might lose them money, have responded by lobbying to dilute, delay or diminish these regulations. Some more visionary companies, seeing there’s only one direction of travel unless the laws of atmospheric physics change in our lifetimes, have embraced them as a way of making more money.

Both, however, face the same dilemma: the ‘SME Paradox’. To meet their legal obligation to calculate their Scope 3, big businesses need accurate carbon footprint data from the SMEs in their supply chain, who are under no such legal obligation and lack the money to pay human professionals to do expensive carbon audits.

There’s no shortage of carbon consultancies offering AI-driven ‘solutions’, but they all face the technology’s fundamental ‘garbage in, garbage out’ problem. There is no accurate, granular, consistently-tagged and well-taxonomised data training set to point their LLMs at. Train a robot on inaccurate data, and its guesses become wilder and wilder, with no basis for course correction.

By maintaining an open-source, transparent database of real-world SME carbon footprints, categorised by business type, See Through Carbon provides this data training set for free.
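To make “granular, consistently-tagged and well-taxonomised” concrete, here is a minimal sketch of what one record in such a training set might look like. The field names and the sector tag are invented for illustration; they are not See Through Carbon’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical SME footprint record: a consistent taxonomy tag plus the
# three emission scopes, with provenance marked so "garbage" (estimates
# presented as measurements) can be filtered out before training.

@dataclass
class FootprintRecord:
    business_type: str          # sector taxonomy code, e.g. a SIC-style tag
    reporting_year: int
    scope1_tonnes_co2e: float   # direct emissions
    scope2_tonnes_co2e: float   # purchased energy
    scope3_tonnes_co2e: float   # supply chain and disposal
    methodology: str            # "measured" vs "estimated" provenance flag

    def total(self) -> float:
        return (self.scope1_tonnes_co2e
                + self.scope2_tonnes_co2e
                + self.scope3_tonnes_co2e)

# An illustrative record for a small bakery, with made-up numbers
record = FootprintRecord("bakery", 2024, 12.0, 8.5, 95.0, "measured")
scope3_share = record.scope3_tonnes_co2e / record.total()
```

Even this toy example shows why consistent tagging matters: only when every record carries the same sector taxonomy and scope breakdown can an LLM compare a business against its peers and generate advice that isn’t guesswork.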

***

To join See Through Network’s global team of pro bono contributors, email volunteer@seethroughnews.org