
AI’s Dirty Secret Remains (Un)Safe


Global Artificial Intelligence Summit does nothing to lift the curtain on computing’s deeply inconvenient truth – its exploding carbon footprint

Imagine a global conference on aviation, cars, agriculture, construction or any other greenhouse-gas-emitting field of human endeavour with global impact that didn’t talk about carbon emissions. Inconceivable? We just had one.

AI Safety Summit

Even amid the tumult of the latest Middle East crisis, the havoc caused by the latest extreme weather events, and the myriad other headlines competing for our attention, the UK’s AI Safety Summit managed to have its moment in the limelight.

For a couple of days at the beginning of November 2023, the world’s Artificial Intelligence, business and political elite gathered to stroke beards, furrow brows, stake out positions, debate regulation and stage press conferences.

To buttress Britain’s claims to be a world leader, the summit was held at Bletchley Park, the WW2 code-breaking HQ where the first electronic computer was assembled.

There, Alan Turing famously broke the Nazis’ ‘unbreakable’ Enigma machine code, and Tommy Flowers’s Colossus Mk 1 cracked the even tougher Lorenz cipher, arguably winning WW2.

Inarguably, Colossus sparked the digital revolution that has brought us to international summits on computing safety.

Dangers abound

There is, as we learned, plenty to worry about. 

Forget about computers passing the Turing Test, and humans finding it impossible to tell when they’re talking to a robot. Things are already worse: computers can’t tell the difference between content lovingly crafted by humans and content instantaneously generated by ChatGPT. All that stands between us and AI breaking the internet is relying on good chaps to play by the rules.

But there are no rules, which is why we need a Safety Summit.

Even if you’ve only been scanning the headlines, you’ll already be familiar with these Good Robot/Bad Robot narratives, which long pre-date Artificial Intelligence, Machine Learning, Natural Language Processing, or even computers. 

We’ve been playing out our fears and fantasies in Hollywood movies since the first depiction of a robot in the 1918 Harry Houdini vehicle ‘The Master Mystery’ set the template, with its candelabra-carrying robot ‘The Automaton’.

This time it’s different, chorused Big Tech bosses and the politicians who nominally hold the power to regulate them. At loggerheads throughout the evolution of social media, both sides solemnly declared they’d never allow the same disastrous, Wild West free-for-all again, and publicly committed to regulation. In principle.

China, cast by many in the West as tech villains, even sent a delegation to reassure everyone they’re just as worried as the rest of us. Behind the scenes, in Silicon Valley and Beijing, and beyond, hi-tech companies furiously vie for start-up funding, seeking first-mover advantage.

Could be worse. At least everyone appears to agree on the nature of The Problems, even if we’re a long way from nailing down the devil in the detail of what any global regulation might look like.

The Problems addressed by the summit were all speculative future problems.

What did it have to say about its real, current problem?

Zip.

Don’t Just Blame The Media

The Safety Summit headlines and images featured big shots, none bigger than tech bro superstar and Overlord of Silicon Valley Overlords, Elon Musk. Musk duly dominated the headlines.

It would be reasonable to blame this on superficial, headline-hunting hacks, dazzled by celebrity stardust, with attention spans even shorter than those of the audience to whom they pander.

Reasonable, but, in this case, unfair. The politicians attending the summit, and their civil servants and advisors whispering in their ears, were just as bad.

All of them – the politicians, tech bros, regulators, academics and journalists – ignored the massive, flatulent elephant in the room: computing’s carbon footprint.

Look at the summit programme: two days, eight sessions, multilaterals, bilaterals, expert groups. Not a single session, memo, background briefing or sound bite addressed the issue.

What garnered the most attention of all was the concluding interview, in which British Prime Minister Rishi Sunak, who fancies the UK’s chances of becoming the emerging industry’s global ‘referee’, questioned Elon Musk.

Sunak & Musk – what did we learn?

The interview of Elon Musk, owner of Twitter/X and boss of SpaceX, Tesla and The Boring Company, by British Prime Minister and Safety Summit host Rishi Sunak got blanket press coverage.

The spectacle of the leader of the world’s sixth-biggest economy spending an hour lobbing softball questions at the Richest Man In The World was undeniably newsworthy, though not for the reasons the PM’s press team may have hoped.

Instead of cementing Britain’s aspirations to be AI’s global ‘referee’, the interview provoked widespread incredulity and ridicule. 

Maybe that was why the PM’s minders only allowed business leaders to ask questions, while journalists looked on in mute disbelief.

Some commentators were frank in their astonishment at the PR spectacle of the British Prime Minister channelling his inner morning TV presenter interviewing a Hollywood A-lister.  

Sky News veteran Sam Coates memorably billed it as ‘one of the maddest events I’ve ever covered’. Many others questioned the surreal spectacle of national leaders playing supplicant to Silicon Valley Overlords, while tech giants begged governments to regulate them.

But this could be dismissed as ignorant media froth by embittered journalists, miffed at being excluded from the main event.

What of the actual substance of the many AI dangers raised by Musk, the PM and the entire 2-day Summit of players, regulators and interests?

Don’t think of humans as biofuel for The Matrix

On national radio the following morning, British computing pioneer and guru Dame Wendy Hall was asked what she thought of the show. 

She laughed. Musk was ‘quite mad in many ways’, she said, after the BBC presenter asked her to respond to a highlight reel of Musk’s pronouncements, which included that AI will:

  • one day be a ‘magic genie’ granting limitless wishes. 
  • mean the end of work – ‘there will come a point where no job is needed, the AI will be able to do everything’.  
  • mean we won’t need a universal basic income, but a ‘universal high income’ (the PM didn’t ask him how this would be funded).  
  • present an ultimate challenge to mankind, to find meaning in life once we have no jobs.
  • ‘know you better than you know yourself’.
  • help children who don’t have friends. 
  • be ‘the most disruptive force in history’, and ‘smarter than the smartest human’.

So what did Dame Wendy make of Musk’s sound bite buffet of planned and off-the-cuff bon mots?

She rejected the idea humans would ‘just become the biofuel for The Matrix’, while managing to put the very thought into listeners’ heads. Dame Wendy reckoned many of Musk’s tech worries won’t bother us for some time yet, quoting another AI sage who said ‘not in my lifetime’ (Dame Wendy was born in 1952).

What did she say when the BBC presenter asked her the killer question about computing’s dirty secret, its carbon footprint?

We don’t know. He didn’t ask.

What was NOT being said?

Musk is an experienced and adept showman. Like Trump, he plays the media like a fish. Unlike Trump, he’s the richest man in the world and owns Twitter.

The richest man in the world hardly needs the money, but of course the more Twitterspats he provokes, the more eyeball attention he can sell to advertisers.

Casting himself, unchallenged, as humanity’s enlightened saviour/genius/oracle does Musk’s ego, and stock price, no harm either.

The content generated by his high-status interview enriched one of his stable of companies, Twitter/X, while cementing his ‘planet-saving’ chops with another of his businesses, electric vehicle giant Tesla, and drawing a veil over the planet-abandoning dreams behind yet another of his properties, SpaceX.

But there’s a deeper game being played here, which Dame Wendy did explain. 

‘They’re pushing this hype cycle because it brings investment to them. They’re scaremongering and they want to be regulated because it’s ‘not their fault’, but of course they’re all competitive, and will drive for the best technology, which will drive AI farther and faster.

We have to think about not just what the technology is doing, but what do we want with AI? What sort of society do we want to build, and how can we make sure the technology that’s developed is used for the good and not for the bad. These are really difficult, complicated questions.’

Sure. But what about the really difficult, complicated question no one was asking, about AI’s dirty secret?

Remove the Money Goggles, and what do you see?

What on earth could an interview between a prime minister worth £730m and the world’s richest man have skated over?

What little detail might have eluded an interviewer with a net worth twice that of King Charles, and an interviewee with a net worth currently somewhere between the GDP of Portugal and Iraq? 

What inconvenient truth might have slipped the minds of an ex-Goldman Sachs hedge fund manager and a man who not only dreams of reaching Mars, but has his best people working on it?

Might a certain awkward matter not have immediately occurred to the former deputy Prime Minister, now face of Facebook, Nick Clegg, as they all hobnobbed with venture capitalists, bankers, investors and fund managers?

Is it more than snide sniping to observe that Money may have had something to do with the total absence of computing’s carbon cost from the official agenda, press conferences, and media coverage?

AI’s Dirty Secret – its carbon footprint

Trawl through all the programme items on the 2-day summit, listen to all the broadcast coverage, read all the think-pieces, op-eds and editorials on the Summit, and you won’t find the following facts even referenced:

  • The Internet already generates 4.5% of global greenhouse gas emissions, more than twice the 2% emitted by aviation.
  • Large Language Models (LLMs), the tech behind ChatGPT and its imitators, have produced a massive spike in demand for data centre processing, the source of nearly all of computing’s carbon footprint; the jump is comparable to the invention of the internal combustion engine.
  • All that ‘cloud’ computing doesn’t actually take place in the clouds, but in earthbound data centres that require as much power as a small town even when idle, just to stop their processors from overheating. 
  • Many data centres have their own power plants, to stop the lights going off in nearby towns and their chips bursting into flames. They advertise the ones powered by renewable energy; most run on fossil fuels.
  • Nvidia, with an 87% market share in the chips that need all that cooling power, keeps producing new designs optimised for performance, with ever higher power demands.

Substitute ‘car industry’, or ‘agriculture’, for ‘Artificial Intelligence’, and imagine a global industry summit for anything else that would be allowed to get away with ignoring such a massive, flatulent, sooty elephant in the room.

  • How many people at the summit stand to make money from mentioning AI’s dirty secret? 0%
  • How many stand to lose money by restricting the technology’s growth on environmental grounds? 100%

Some academics are starting to cough politely and mention the additional power demands a ChatGPT interaction has over a regular Google search (one Dutch researcher reckoned that making every Google search AI-powered would consume as much power as Ireland does in a year), but even these miss just how bad the problem is.

Serving search queries is relatively trivial compared to the huge additional computing demands of all those proliferating AI start-ups training their algorithms.

See Through Cassandra

All the facts mentioned above come from a deep-dive article See Through News published, Computing’s Carbon Footprint: the other AI threat. They are not contested by anyone serious; the serious just prefer not to mention them.

The article also includes a series of quotes from one of the very few other articles you can find about this issue online. 

It’s from Semiconductor Engineering (strapline: ‘Deep Insights for the Tech Industry’), a forum for the engineers whose job it is to actually make the chips that power the data centres that gobble up all that energy. 

In the summer of 2022, just before the ChatGPT tsunami broke, the magazine published an article headlined ‘AI Power Consumption Exploding’. It gets straight to the point:

Machine learning is on track to consume all the energy being supplied, a model that is costly, inefficient, and unsustainable. To a large extent, this is because the field is new, exciting, and rapidly growing. It is being designed to break new ground in terms of accuracy or capability. Today, that means bigger models and larger training sets, which require exponential increases in processing capability and the consumption of vast amounts of power in data centers for both training and inference. In addition, smart devices are beginning to show up everywhere. But the collective power numbers are beginning to scare people. 

The Hope Bit – See Through Carbon

While the engineers tasked with facilitating this latest human folly to supercharge our fossil fuel addiction were panicking about who was going to tell them to stop, See Through News and its sibling organisation, See Through Carbon, were trying to find a solution that:

  1. Used Artificial Intelligence to reduce carbon, not increase it
  2. Didn’t require money

The pilot See Through Carbon Competition, giving away half a million dollars of supercomputing to benefit sustainability in the world’s poorest countries, demonstrated that machine learning and cloud computing can be leveraged to massively reduce carbon. 

The See Through News Podcast series AI’s Dirty Secret, or How To Spend Half A Million Dollars of Supercomputing tells that remarkable story over 10 cliffhanger episodes, or you can read the latest in this Competition update article.

These and other See Through projects are trying to use advanced computing technology to measurably reduce carbon, at a time when, unregulated, it threatens to blow any carbon savings we’re making elsewhere into oblivion, driven by our unconstrained demand for cat videos and pictures of puppies in funny hats.

Diligent future historians, researching the pivotal year of 2023, will find the See Through Carbon Competition and the AI Safety Summit both took place the same year.

Which do you think will look like it was addressing the real issues, now?