AI, Climate Change & Monkeys Climbing Trees to Reach The Moon

AI & the Climate Crisis – Twins born in the ’70s

By now, most of us have heard of AI. Newspapers no longer have to spell out that it stands for Artificial Intelligence. 

Articles featuring ‘AI’ abound. Some claim miracle breakthroughs conjured by algorithms, others issue dire warnings of imminent robot takeover.

Both Brave New World and Harbinger of Doom AI articles inhabit their own news niche: Science, Tech, The Future.

We’re dimly aware our future is being determined by a micro-elite of super-boffins, practising computer magic, but how much do we really understand about what AI actually is?

We’re more familiar with another news staple, also born in the ‘70s, and now dominating our lives – Climate Change.

Climate change stories, though occasionally leavened by a ‘positive’ article about magic-bullet solutions, are generally doom-laden. Articles about environmental catastrophe, once speculative, are now descriptive, as the atmospheric physics driven by our ongoing addiction to fossil fuels takes its course, before our eyes.

The rise of AI has coincided almost exactly with the reality of Climate Change. As the consequences of the Industrial Revolution’s unquenchable thirst for More Power emerged, so did the Digital Revolution.

But like ‘70s twins separated at birth, AI Hope and Climate Crisis Despair are rarely seen in the same room.

For half a century, as they’ve grown to maturity, the AI and the Climate Crisis Twins have lived parallel lives in the public consciousness, each reflecting a different facet of our species.

AI reflected the best of us, an optimistic vision of human ingenuity. Climate Crisis the worst of us, a gloomy exposure of human stupidity. 

In private, the Twins actually know each other well.

For decades, in labs and research institutions, climate scientists have depended on AI to build their models and forecasts. AI has been essential to climate modelling ever since the infamous hockey stick graph revealed how our future will be shaped by the surge in greenhouse gas concentrations.

In public, the Twins have largely followed different, separate trajectories, and occupied different narratives in the public mind. By and large, AI’s bright utopia has worn the white hat, Climate Change’s dark dystopia the black hat.  

But when’s the last time you read an article connecting these Twins? 

How much have you thought about AI’s own role in driving climate change? Or how AI might really help steer us from our course to oblivion?

This article takes a step back, for a long view of how AI has developed over the past 50 years. 

What directions has AI taken, what are the implications of its journey, and how might it become part of the solution, rather than part of the problem?

Robot Translation – an AI success story

First, a brief history of AI, told through one of its most successful real-world applications – automated language translation, known to AI folk as ‘Machine Translation’ (MT).  

The story of MT is the story of AI writ small, in a form most of us are familiar with. We all understand The Problem – humans communicating in different languages. But most of us have little idea how The Solution actually works.

Those old enough to recall its 2006 launch might remember the online memes that ridiculed Google Translate’s clunky literal translations, but it got much better very quickly.

Within a decade, businesses were routinely using it. Today, Google Translate and competitors like DeepL, Microsoft Bing Translator, Systran, Amazon Translate and many other online services provide an accessible, practical, free communication tool.

Machine translations in and out of English are perfectly comprehensible now. You can read pages without coming across an obvious mistake, and the errors that do slip through are often indistinguishable from those a non-native speaker might make.

Young Swedish, Iranian or Angolan fans of K-Pop, manga or Shaolin kung fu now take it for granted that they can search online, barely aware that content is being translated from Korean, Japanese and Chinese.

Even legal documents are now largely translated by robots, with humans spot-checking them for errors.

Old-school Google Translate Epic Fails in English are hard to find nowadays. 

If you’re looking for such yuks, you need to translate between two unlikely partners, ideally involving at least one relatively obscure language. Three years ago, one language forum used Google Translate to render, from Finnish to Arabic and back again, the following:

“On Shrove Tuesday, Finns go sledding and eat shrove buns, which contain, among other things, whipped cream.”

It duly elicited:

“In Finland, on Tuesday or on Tuesday, people used to lubricate hills and eat fatty Tuesday sandwiches or a trio of bananas.”

But try it today and, if you’re looking for a laugh, it’s disappointingly accurate.

Machine Translation’s original bias to English is fading, as third-party (i.e. non-English-to-non-English) translations rapidly improve. 

Some of this is due to the ever-expanding Internet providing more and more examples of Finnish-Arabic translations, or Icelandic translations of Mongolian, for MT algorithms to learn from. Some of it is down to clever workarounds involving a detour via English.

But how was the remarkable improvement of robot translation achieved? 

Let’s peer through the window of MT, and see how it helps tell the history of AI for non-computer scientists.

How the robots learned to translate

The Heuristic Fencers

In the early days of AI, researchers used a ‘Heuristic’ approach to crack the translation problem. Derived from the Greek for ‘I discover’, heuristics attempted to emulate human language acquisition.

The idea was that if they could code the rules of language clearly enough, the programme would be able to work out how to translate all by itself, without having to be told every single possibility.

This worked OK for languages like Esperanto or Latin, which are relatively free of the irregularities, exceptions and inconsistencies that make a mongrel language like English such a challenge.

These Heuristic pioneers were obliged to approach the MT problem with clever swordplay rather than all-out assault, as computer space and processing power were at a premium. 

They studied Linguistics, and mastered the fundamental rules of deep structure and deep grammar. They wrote elegant, concise, lean code to save space and make the most of what time they could book on university servers.

Heuristic-based MT operated in a world of scarce resources, which had to be carefully rationed. Its pioneers had no choice.

The Machine Learning Musketeers

As computers got faster, processing cheaper, and storage bigger, MT researchers were able to deploy a new technique called ‘Machine Learning’ (ML).

ML also sought to ‘teach’ a programme to work out translation for itself, but to help distil the fundamental rules into code, ML researchers used small samples of carefully annotated data to ‘train’ their translation programmes.

This is like the approach used by opinion pollsters. To estimate the intentions of a voting population of tens of millions, they poll a sample of a few thousand carefully selected individuals.
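
For the code-curious, here’s a minimal sketch of the pollster idea in Python. Everything here is invented for illustration – a million simulated ‘voters’ and a 2,000-person sample – but it shows how a small sample can stand in for a huge population:

```python
# Toy illustration of sampling: estimate a property of a large
# "population" by asking only a small, randomly chosen sample.
# All numbers are invented for illustration.
import random

random.seed(42)

# A million simulated "voters", roughly 52% of whom favour candidate A.
population = [random.random() < 0.52 for _ in range(1_000_000)]

# The pollster asks just 2,000 of them...
sample = random.sample(population, 2_000)

# ...and uses the sample's answers to estimate the whole population's.
true_share = sum(population) / len(population)
estimated_share = sum(sample) / len(sample)
print(f"True share: {true_share:.3f}, estimated from sample: {estimated_share:.3f}")
```

Early ML translation did something analogous: generalise from a small, carefully annotated sample of sentences to the whole of language.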

And like opinion polling, ML proved to be a very crude, ineffective tool that often went wrong.

Applied to translation, ML worked a bit better than the Heuristic approach, but not by much. It still produced nonsense results, and was nowhere near the standard of a human translator.

These ML Musketeers added the firepower of a musket to the fancy Heuristic swordplay, but their muskets were primitive, took ages to load, and often failed to fire.

The Big Data Machine-Gunners

Then along came MT’s big dog: Big Data. Made possible by ever cheaper, faster computing, it delivered a huge leap in MT results.

Big Data chucked out the Rule book, gave Heuristics the heave-ho, and massacred ML. Big Data used brute force. The Big Data Boys (and they were mostly boys) brought a machine-gun to the MT battle.

Instead of using a small expertly-annotated sample dataset to train their software, new computer hardware now provided the firepower to use the entire Internet, just as it was. No need to annotate, no need for carefully selected representative samples.

Why use opinion polls when you can simply scrape the World Wide Web to get the actual election result? 

Applying Big Data to machine translation didn’t require the programmers, or the robot, to know anything much about Chomskian deep grammar, or even the rudiments of linguistics.

The Big Data Boys rendered obsolete all the hard-won skills of those early students of the secret alchemy of hidden Heuristics.

This was the digital equivalent of Victorian factory owners replacing skilled weavers with steam-driven power looms. The Big Data Boys copy-and-pasted some basic instructions on how to learn, loaded them onto a massive computer, pointed it at a big enough pile of text in Language A, and another pile of text in Language B reckoned to be good translations, and made themselves a cup of tea while the machines whirred and crunched.

There’s a bit more to it than that, of course, but the point is that Big Data doesn’t try to work out ‘rules’. It just crunches huge quantities of text to calculate the statistical probability of one word following another in any translated sentence. With more and more data points, the predictions get better.
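
To make that concrete, here’s a toy sketch of the counting idea. Real systems counted word and phrase statistics across billions of aligned sentences, not a two-line invented corpus, but the principle – probabilities from counts, with no grammar rules coded anywhere – is the same:

```python
# Toy word-prediction model: count, in a (tiny, invented) corpus, how
# often each word follows each other word, and turn the counts into
# probabilities. No grammar rules are coded anywhere.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
]

follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

def probability(current_word, next_word):
    """Estimate P(next_word | current_word) purely from frequency counts."""
    counts = follow_counts[current_word]
    total = sum(counts.values())
    return counts[next_word] / total if total else 0.0

print(probability("the", "cat"))   # 0.5: 'the' is followed by 'cat' in 2 of 4 cases
print(probability("the", "fish"))  # 0.25
```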

Neither the robots, nor their AI programmers, could ‘explain’ how they did it in a way that would satisfy linguists, but never mind the academic theory, feel the quality.

When it came to MT, Big Data provided the Great Leap Forward, and was what enabled Google Translate to improve so rapidly.

The Final Push

Despite the huge leap in effectiveness, the Big Data approach still fell slightly short. 

It worked fine for everyday emails, texts, and formulaic news articles that follow consistent patterns, but anything more demanding still required the human touch. 

It had removed much of the drudgery of translation, but couldn’t claim to have surpassed humans in anything other than speed. The more challenging the task, the less effective Big Data translations were.

The summit, the Holy Grail of Machine Translation, was matching humans at literary translation, but robo-translations of Shakespeare into Swahili still fell well short. Scraping more of the Internet with more powerful computers produced diminishing returns.

Let’s park this brief history of AI through the lens of Machine Translation for the moment. 

We promise to reveal What Happened Next, and the chances of robots ever matching human translations of Shakespeare into Swahili.

But let’s take a break to consider what the story so far tells us about how AI has affected humans and AI’s link to advances in hardware, before we ask what the point of AI is, and what any of this has to do with AI’s dark Twin, Climate Change…

Consequences of the rise of Machine Translation

What about all those weavers, displaced by the power looms?

Entire professions were transformed within a few years. Professional translators became editors, polishers and tweakers.

From the yoke to the power loom, washing machine and photocopier, technical innovations have liberated humans from dull, repetitive chores to do more interesting, demanding brain work. 

The transition put large numbers of people with now-obsolete skills out of work, but hey-ho, that’s progress.

Robots replaced humans on production lines, and algorithms took over the tedious stuff from human translators. In an instant, MT would rearrange the grammar furniture in German sentences, come up with the correct translation for a specialist term unfamiliar even to native speakers, and bang out boilerplate jargon.

Human translators still had a function, but it was now to tidy up the details, complete the last few yards, add subtle human touches that still eluded the robots.

The job of professional EU translators in Brussels went from being removal men, humping huge great lumps of bureaucratic jargon from the French to the Germans, to that of jewel polishers, putting a final gloss on the rough edges left by the machines.

Monkeys Climbing Trees

In the early days of AI, researchers used a particular metaphor to describe the dilemma faced by pioneers.

Computer scientists could have seen themselves as explorers of virgin rainforest, hacking their way through the dense undergrowth, not knowing if the next step was El Dorado or a cliff edge.

But instead, they asked themselves, are we monkeys climbing a tree to reach The Moon? 

Every branch we ascend, they’d say, we’re getting closer, but we don’t know if we’re taking the right approach. However high we climb, they fretted, The Moon may always be beyond our grasp.

At first, it was relatively clear what The Moon was – true Artificial Intelligence. 

The father of AI, Alan Turing, made the first serious attempt at defining machine intelligence (the term ‘Artificial Intelligence’ itself was coined a few years later by John McCarthy).

During WW2, Turing had designed ‘the Bombe’ at Bletchley Park, an electro-mechanical code-breaking machine that cracked Nazi Enigma messages and hastened Hitler’s downfall.

In a 1950 paper, Turing posited what became known as The Turing Test.  

If a human, typing questions into a terminal connected to two other terminals, can’t tell which answers are coming from another human and which from a computer, then the computer could be said to display Artificial Intelligence.

For early pioneers, The Turing Test was The Moon they were shooting for. You can imagine how distant it appeared when he wrote his paper, in 1950 Manchester.

But three years earlier, in New Jersey, something had happened that would change everything.

Hardware, Software & AI

On December 23, 1947, at Bell Laboratories in Murray Hill, New Jersey, the world’s first transistor was successfully tested. 

Turing couldn’t have known the significance of that invention, nor how it would accelerate AI’s pursuit of his Test.

Engineers started assembling groups of transistors onto silicon wafers. By 1965, Gordon Moore, who went on to co-found Intel, publicly mused that the density of transistors on a chip (and hence computing power) would double every 18 months.

As this prediction kept coming true, the observation became known as Moore’s Law. Astonishingly, engineers have found different ways of, more or less, obeying it ever since.
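
The arithmetic of that observation is worth pausing on. Here’s a back-of-envelope sketch – the starting count is roughly that of Intel’s first microprocessor, and the strict 18-month cadence is the popular version of the Law, while the real cadence has wobbled between one and two years:

```python
# Back-of-envelope Moore's Law: transistor count doubling every 18 months.
# Starting figure is roughly Intel's 4004 microprocessor (1971); the
# 18-month cadence is the popular version of the Law, not an exact fact.
START_YEAR, START_TRANSISTORS = 1971, 2_300
MONTHS_PER_DOUBLING = 18

for year in range(START_YEAR, 2022, 10):
    months_elapsed = (year - START_YEAR) * 12
    count = START_TRANSISTORS * 2 ** (months_elapsed / MONTHS_PER_DOUBLING)
    print(f"{year}: ~{count:,.0f} transistors per chip")
```

Run for 50 years, that compounding is what turned room-sized rarities into the data farms we’ll meet below.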

IBM boss Thomas Watson’s (probably apocryphal) 1940s prediction that the world would only need around five computers rapidly became a joke. The availability, speed, capacity and power of computing hardware have opened up ever more possibilities.

Each development in AI – the Heuristic Fencers, the Machine Learning Musketeers, the Big Data Machine-Gunners – was enabled by this explosion in computing resources.

(If this is starting to sound a bit like the Industrial Revolution’s exploding appetite for fossil fuels, you’re getting ahead of the game – we’ll come to that when we look again at the links between the 1970s Twins…)

Before we resume our brief history of AI and discover What Happened Next, it might help to mention the contribution made by AI researchers’ obsession with ancient board games.

Game theory

Ever since it was first developed in 7th-century India, mastery of chess has been seen as a refined definition of human genius.

In 1997, IBM’s Deep Blue computer marked a milestone in AI development – and the retreat of human hubris – when it defeated world chess champion Garry Kasparov.

IBM used the lessons learned in this apparently trivial success to form the basis of their Blue Gene supercomputer project, building bigger, faster computers than ever before. 

This added fuel to chipmakers’ obsession with the ‘clock speed’ of single processors. For the next few years, this framed what AI considered ‘progress’ in computing, until it turned out to be a dead end.

But computer scientists’ interest in board games didn’t end with Deep Blue. 

There was a far tougher nut to crack, an even more ancient board game, played in China for 2,500 years – Go.

It might not be immediately obvious why AI experts considered Go to present a different order of challenge. 

After all, Go seems ‘easier’ than chess. It can be played intuitively by children. Players simply alternate placing black and white stones on a 19 x 19 grid. When you’ve surrounded your opponent’s stones, you can capture them.

But despite the simplicity of its rules – in fact because of them – Go is a game of astonishing complexity. For computers, working out how to beat the best human Go players was way more challenging than defeating chess grandmasters. 
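
A rough back-of-envelope calculation shows why. The branching factors and game lengths below are commonly quoted ballpark figures, not exact values, but the gap they reveal is the point:

```python
# Rough comparison of brute-force search spaces. Branching factors and
# game lengths are commonly quoted approximations, not exact values.
chess_moves_per_turn, chess_game_length = 35, 80
go_moves_per_turn, go_game_length = 250, 150

chess_tree = chess_moves_per_turn ** chess_game_length
go_tree = go_moves_per_turn ** go_game_length
go_board_states = 3 ** (19 * 19)   # each point empty, black or white

print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")        # ~10^123
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")           # ~10^359
print(f"Go board states: ~10^{len(str(go_board_states)) - 1}")   # ~10^172
```

Raw search that could (just about) crack chess had no hope against numbers like these, which is why Go demanded a different approach.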

For AI’s finest minds, Go became an obsessive challenge. Many experts reckoned beating a top human at Go was intrinsically impossible, beyond technology’s reach. A Moon Shot.

DeepMind, operating at the bleeding edge of AI research, devoted itself to the Go challenge. Investors, understanding the implications of the project, showered DeepMind with venture capital.

It took until 2016 for DeepMind to make headlines, when its AlphaGo programme, the apex of AI tech, defeated Lee Sedol, the 9th-dan world Go champion.

Just as Deep Blue’s defeat of Garry Kasparov opened the door to IBM’s Blue Gene supercomputer project, DeepMind’s AlphaGo triumph formed the foundation of its AlphaFold programme, opening up virgin swathes of innovation previously thought inaccessible.

DeepMind, and its billionaire investors, chose to deploy AlphaFold’s magic powers to solve a hugely complex problem in a field that could bring hugely lucrative rewards – discovering new drugs. AlphaFold’s superpowers were put to the test of unravelling Big Pharma’s Gordian Knot – predicting protein structures.   

Understanding how and why proteins fold themselves to create different properties is fundamental to huge swathes of biomedical research. 

Predicting how proteins could be created, and what properties their structure would give them, was a massive leap forward. AlphaFold’s pharmaceutical origami saves wasting huge amounts of resources on trial and error. It can locate drug needles in barnfuls of haystacks.

Now we’re back up to speed with the more recent chapters of AI history, let’s return to find out What Happened Next, after Big Data hit the buffers. 

Back to the future

Remember how the Big Data Boys were starting to experience diminishing returns with their Machine Translation? 

Their colleagues applying AI to other fields had the same problems. Raw computing power was starting to bump into its limits. ‘More power!’ and ‘Bigger Data!’ were no longer cutting it.

Big Data’s strength, it turns out, is also its weakness. 

Scraping the Internet for raw material brought problems. Results can only be as good as the data input. If the training data, however large, is tainted, the results will be too. Garbage in, Garbage out.

The Internet reflects the reality, not the aspiration, of homo sapiens. Use Big Data to programme a Robocop, Robojudge or Roborecruiter, and their perfect mimicry of our sorry species will, quite accurately, also be racist, sexist and homophobic. 

Faced with the possibility that Big Data had run out of road, AI innovators started to circle back to the old Heuristic approach. What would happen if they combined the rules-based system with the horsepower of modern supercomputers? Forget Machine Learning – what if this could create Self-Teaching systems?

They blew the dust from ancient AI scriptures, removing them from museums to trawl them for forgotten wisdom. AI Elders were tempted out of retirement.

AI’s new pathfinders re-appraised the wisdom of the Elders with new respect. Back when they had to live within their modest means and nurture their computing resources, their frugal approach had produced long-neglected insights. 

The AI Elders had no option but to be frugal as they hunted for those elusive heuristics. In the ’80s, computing was a precious, finite resource.

They were Kalahari tribesmen eking out a gourd of water, before being drenched by the Big Data firehoses.

Returning to that early metaphor, AI’s young guns wondered if Big Data, for all its successes, was ultimately a dead end. A monkey climbing a tree.

But since the ’80s, the days of the Elders, the Moon had changed. The Turing Test, conceived before the integrated circuit had set Moore’s Law in motion, was, if not dead, then no longer relevant.

Fooling humans into mistaking computers for humans, it turned out, was a relatively trivial task. We’re easily fooled. 

Back in the ’80s, distinctions were drawn between the kind of things computers find easy but humans find hard (like calculating pi), and things humans find easy but computers seemed unlikely ever to achieve in our lifetimes.

Facial recognition was often cited as an example. The prospect of a computer simulating the way a newborn human baby can recognise its mother’s face was considered as remote as nuclear fusion.

But not only have computers mastered this ‘unique’ human ability, they’ve far surpassed it. AI can now not only surveil millions of human faces at once, but ‘read’ their emotions at the same time.

Robots are now faster, better or more comprehensive than humans at matchmaking, facial recognition and X-ray diagnosis. 

The Moon had evolved into something else. AI philosophers like Stephen Hawking now spoke of the ‘Singularity’: a moment when machines achieve human-like consciousness, and artificial superintelligence (ASI) could result in human extinction.

Things have moved on a lot since Turing’s simple Test, and it’s no clearer if the Monkey is climbing higher up the Tree, or progressing toward The Moon. 

But these re-definitions of The Moon all retain the notion of AI having a goal, an aspirational final destination that would constitute ‘success’. The Moon remains a destination, not a journey. A straight line, not a circle. A hockey stick curve, not a doughnut…

None of them put the Monkey, or the Tree, at the centre of the metaphor. 

AI – Evil Twin or Good Twin?

One of AI’s charms is how it uses ordinary language to describe incredibly complex things.  

Unlike, say, the Latinisms deployed by biologists, who say ‘charismatic megafauna’ when they mean ‘big cute animals’, or the neologisms coined by particle physicists, like ‘Higgs boson’ and ‘gluons’, AI boffins use regular words we’ve all heard of: Expert Systems, Machine Learning, Big Data, even Artificial Intelligence itself.

This is reassuring, but also deceptive, as they carry very specific meanings for computer scientists.

Most of us understand the meanings of the words Parallel and Processing. Teenagers say ‘Massive’ and ‘Embarrassing’ on a regular basis, but what do these terms mean in the world of AI, and why have they become the leading edge of this cutting-edge technology?

Take ‘Parallel’. This can mean different things, depending on what you’re describing, and the era when the term was deployed.

Any two computers working together on the same project are called ‘parallel’. Two or more processors on a single chip are called ‘parallel’ (or ‘multicore’). 

From 2004, Intel went multicore. Big Data advances encouraged computer scientists to believe their conveyance to the Moon would be bigger and faster multicore computers running in parallel. 

Dozens of processors on hundreds of chips, in thousands of units in humungous ‘data farms’ are ‘Massively Parallel’. 

AI’s future at first appeared to be determined by the ‘clock speed’ of individual processors, but by the turn of the millennium, engineers had clocked this was unsustainable. The more transistors they crammed onto a chip, the hotter they got, and the more energy was required to cool them. 

But as the insatiable demand for computing power increased, the creation of more data farms simply replicated the chip-cooling problem at a bigger scale. 

Data centres now require their own power stations, as they use as much energy as a town. Two-thirds of the energy goes not to processing, but to keeping the servers cool enough to operate.  

Many of these data centre power stations generate energy renewably – but that renewable energy could be put to better use displacing fossil fuel energy elsewhere. And the data centres’ growth keeps creating greater demand.

If you think flying is a big problem for climate change, you’re not wrong, but it’s not nearly as big a problem as computing.

Data centres alone – not including all the world’s other computers – may already account for nearly 4% of all carbon emissions, twice as much as the aviation industry.  

And consider all the embodied energy in building these data centres, making the servers, and replacing them every couple of years with faster, newer models.

This very much makes computing part of the Problem, not part of the Solution.

Computing capacity is shooting up on its own hockey stick curve. As we approach the physical limits of the number of transistors that can be crammed onto a silicon wafer, threatening to end Moore’s Law, engineers are looking to use atoms instead, via quantum computing. Currently, this requires energy-guzzling super-cooling.

So instead of being a white-hat-wearing Good Twin, might AI be Climate Change’s Evil Twin?  What if Cain and Abel were both murderers?

We’re almost ready to directly unite the Twins, but first, here’s a bit of light amid all this AI darkness. 

An alternative future that might, finally, make AI part of the solution.

It’s a bit complicated, but don’t worry, we have an analogy that involves football. 

Or rather, a football stadium…

The Football Stadium Problem

Imagine a huge empty football stadium, with a capacity of 100,000. 

Surrounding it are exactly 100,000 fans who want to watch a match that starts in 1 hour. 

You now understand The Problem – but what about the solution?

Three Teams – Team IBM, Team MPP and Team EPP – compete with different Solutions.

Team IBM

Team IBM’s solution was to build the World’s Fastest Turnstile. Precision-engineered from the highest-tech materials, Blue Turnstile was the turnstile to end all turnstiles.

You can guess what happened when they applied it to The Problem. Fat blokes got stuck, doddery folk clogged it up, and it couldn’t handle wheelchairs. By the time all 100,000 fans were in the stadium, it was time for the following week’s fixture.

Team IBM’s Blue Turnstile boffins brainstormed solutions. ‘How about super-cooling the mechanism to virtually eliminate friction?’, suggested one. ‘We need to get the fans to slim down, get fit and move faster’, opined another.

Team MPP

Team MPP, instead of investing all their energy and skills to design one super-turnstile, installed 100 turnstiles.

They didn’t invent their own super-turnstiles from scratch like Team IBM. They bought existing, proven, top-of-the-range turnstiles, then specially adapted them with some clever additions of their own.

The results were much better than Team IBM’s, but the last of the 100,000 fans still squeezed through the last turnstile as the match ended.

Team EPP

Then came Team EPP. They installed 100,000 turnstiles. 

These turnstiles were nothing special, and Team EPP did nothing to customise them. Some they bought off-the-shelf; others they borrowed, reclaimed from junk yards, or recycled from derelict stadiums.

The stadium was packed within minutes, and all the bars ran out of beer half an hour before the match.

Flow, Not Speed

By redefining the Problem, Team EPP arrived at a much better solution, using much lower tech. 

They saw the big issue was not speed, but flow. The bottleneck to the solution wasn’t how fast a turnstile could be, but the intrinsic limitations of relying on a single turnstile, however fancy it was.

Team MPP, or Massively Parallel Processing, had the right idea of increasing the flow, but their thinking was still captured by the lure of the IBM solution. 

It wasn’t until the EPP mob came along, with their Emperor’s New Clothes solution, that everyone slapped their foreheads and said ‘Duh!’. 

That’s why the AI boffins call it Embarrassingly Parallel Processing, or EPP.
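
In code, ‘embarrassingly parallel’ simply means the work splits into fully independent pieces that need no coordination. Here’s a minimal Python sketch of the stadium, with a trivial stand-in for the per-fan work:

```python
# Minimal "100,000 turnstiles" sketch: an embarrassingly parallel job
# splits into independent chunks, so any pool of ordinary workers can
# share it with no coordination between them.
from multiprocessing import Pool

def admit(fan_id: int) -> int:
    """One 'turnstile' processing one fan, independently of all the others."""
    return fan_id  # stand-in for the real per-item work

if __name__ == "__main__":
    fans = range(100_000)
    with Pool() as turnstiles:           # one worker per CPU core by default
        admitted = turnstiles.map(admit, fans)
    print(f"{len(admitted)} fans admitted")
```

No turnstile needs to know about any other: that independence is what lets EPP swap one heroic machine for any number of ordinary ones.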

Might this EPP be a possible solution to AI’s far more embarrassing secret problem, its addiction to unsustainable energy consumption?  

Occasionally, reports from neutral observers like MIT pull back the Wizard of Oz’s curtain, and point out the carbon intensity of services like Netflix and Zoom. No wonder they like using terms like ‘virtual’ and ‘cloud’. It makes it seem like there’s no carbon consequence.

You may be aware that the hidden giants behind the AI revolution are the back-end number-crunching services. The Big Three – Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) – account for around 65% of total global spend on ‘cloud services’. These are the operations with power stations big enough to power towns, most of whose output goes on cooling servers, whether they’re being used to capacity or running ‘idle’ overnight and at weekends.

AI has created these problems, but it can also create carbon-reducing solutions.

EPP solutions are starting to revolutionise cloud computing. Nimble, flexible, scalable startups like Yellow Dog provide tools to distribute complex tasks to idle server capacity anywhere in the world. 

Yellow Dog and their fellow EPP providers essentially offer three dials – Cost, Speed and Carbon Intensity – and provide the optimal solution for whichever their customers decide matters most (see the sketch after this list):

  • If they’re short of cash, they go for the Cheapest
  • If they’re in a hurry, they go for the Fastest
  • If they’re concerned about the future of human civilization, or want to tick some ESG (Environmental, Social and Governance) boxes, they go for the Least Polluting
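
As a sketch of how those dials might work – and this is a hypothetical illustration, not Yellow Dog’s actual API, with all servers and figures invented – the scheduling decision is just a choice of what to minimise:

```python
# Hypothetical "three dials" scheduler (NOT Yellow Dog's actual API):
# score candidate servers by cost, speed or carbon, then pick whichever
# the customer cares about most. All servers and figures are invented.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cost_per_hour: float        # price of the capacity
    runtime_hours: float        # estimated job duration on this hardware
    grams_co2_per_hour: float   # carbon intensity of the local grid

candidates = [
    Server("iceland-idle",  cost_per_hour=1.0, runtime_hours=6.0, grams_co2_per_hour=20),
    Server("virginia-fast", cost_per_hour=4.0, runtime_hours=2.0, grams_co2_per_hour=350),
    Server("sydney-cheap",  cost_per_hour=0.5, runtime_hours=9.0, grams_co2_per_hour=500),
]

dials = {
    "Cheapest":        lambda s: s.cost_per_hour * s.runtime_hours,
    "Fastest":         lambda s: s.runtime_hours,
    "Least Polluting": lambda s: s.grams_co2_per_hour * s.runtime_hours,
}

for dial, score in dials.items():
    best = min(candidates, key=score)
    print(f"{dial}: {best.name}")
```

Notice that each dial picks a different server – which is the whole point: today the customer chooses which dial to turn.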

Right, that’s the end of our ham-fisted attempts to explain AI to ordinary people. 

It’s time to start putting ordinary people, i.e. all of us, first.

Specialists & Generalists, Moon Shooters & Monkeys

AI specialists will find plenty missing in this article, and lots to sniff at: oversimplification, glib summaries of incredibly nuanced issues, unmentioned controversies, crude analogies applied to astonishingly complex systems, overlooked exceptions. 

There’s been no mention of neural networks, until now, or the more technical nuances of different Self-Teaching systems. 

Fair enough. This is written by a non-expert for non-experts.

But every now and then, uber-specialists are also well advised to listen to generalists.

We are, after all, all monkeys occupying the same planet. Those of us who don’t spend our waking hours gazing up at The Moon are entitled to question how we decide to deploy our finite human, financial and natural resources. 

If you can’t see the wood for the trees, you may find yourself, just another primate, alone atop the last teetering tree in the rainforest. When you lower your gaze from the heavens, you may suddenly become aware of the smouldering ashes stretching to the horizon.

This is particularly true when discussing the latest tech. Novelty and greed always provide rich pickings for fooling people, whether the foolers are tricksters or the harder-to-spot self-deluded.

AI has proved no exception as a magnet for snake oil salespersons. Even the smartest non-specialists can be bamboozled by jargon, and distracted by This-Time-It’s-Different cyberspeak. The collapse of crypto exchange FTX is just the latest, and most spectacular, example.

Not just the world’s smartest and richest investors, but people considered AI experts, were comprehensively hornswoggled by tech bro Sam Bankman-Fried’s old-school Ponzi scheme dressed in AI clothes.

Even the self-proclaimed uber-rationalists of Effective Altruism were hoodwinked, and are now struggling to understand how their data-driven, number-crunching approach could have been so easily sidestepped.

Our recent Open Letter warned Effective Altruism of how their ultra-rational approach risked being naive when confronted with the real world, and we take no pleasure in being proved right so soon. 

We’re all just monkeys, easily duped by other monkeys. 

What’s the point?

The term ‘cloud computing’ is misleading, and revealing.

It’s misleading because the computing takes place in real-world, ground-based massive server farms that gobble up so much power they have to build their own power stations.

‘Cloud’ is revealing because it betrays AI’s planet-trashing mindset, so far.

This is the biggest Problem of all. 

From AI’s early days, when they worried about being a Monkey Climbing a Tree to Reach The Moon, no one questioned the destination. The focus was all on The Moon, and not on the Tree.

The computing giants providing the Digital Revolution’s engine room still have their heads in the clouds. They’re still Moon Shooters.

The bosses of these giants, quite literally, want to leave our planet, spending billions on rockets to take them to The Moon and beyond. It’s an expensive, and revealing, hobby.

These Moon Shooter billionaires may or may not reach the Moon, but they’re still all Monkeys. 

And all we have, down here on Earth, are Trees.

If the Moon Shooters stopped thinking about their purpose as a journey, a trajectory, a mission, a destination, a dream, a vision, and started thinking of it as a circle, a closed unit, a sustainable home, they might become part of the Solution, rather than the Problem.

But their fantasies of breaking out of our bubble, rather than repairing it, are indulged and even encouraged by governments who have the capacity to regulate them. 

A few practical suggestions that could be enacted pretty much overnight, given that elusive chap, Political Will:

  • Make tech billionaires pay tax like the rest of us, rather than dodging it and seeking credit for spending our money on their pet projects, even expecting praise for ‘philanthropy’
  • Don’t give customers the choice of which Yellow Dog dial to prioritise. Make them pick the lowest-carbon one. It will probably turn out to be the cheapest anyway.
  • Stop faffing about with complex, cheatable workarounds like carbon credits. Directly tax the hell out of businesses that put more greenhouse gases into the air, and give tax breaks to those that don’t, or that remove them.

Such immediate, practical solutions are not what’s being discussed at COP26, and the Three-Headed Beasts of Government, Business and Media are doing a grand job of keeping such notions off the agenda. Their PR people must be punching the air.

The powerful bonds of Power and Money that link the Three-Headed Beasts to each other explain why all those COP26 decision-makers, even though they have children and grandchildren, ignore the voices of the young protestors outside the security cordon. 

What a waste

Each new COP brings new wasted opportunity for effective climate action, and each wasted opportunity makes The Problem worse.

Some within AI, like Yellow Dog, are seeking to use this amazing new technology to find Solutions, but most of Silicon Valley remains devoted to exacerbating The Problem.

They may hide them away, but look at all those vast, steaming hot data centres, very much grounded, that constitute ‘The Cloud’.

Look at the smartest young minds of the planet, all gathered in Silicon Valley and paid top salaries to work for advertising companies like Google and Facebook. 

All that brainpower, ingenuity, creativity, put to finding tiny marginal gains to get more of us to buy more stuff we don’t need, each new unnecessary purchase pumping more carbon into the air.

All those ‘venture capital’ billions spent on feeding these beasts, to maximise shareholder value.

What a waste of precious, finite resources. Natural capital. Human capital. Financial capital.

All monkeys, inching up trees, eyes fixed on The Moon.

It’s time to focus on the Monkeys and the Trees. And climate change.

Monkeys and Trees

The world of computer-folded proteins must seem quite remote to Pakistanis drowning in floods, Pacific Islanders whose islands get smaller by the year, Iranians facing lethal wet-bulb temperatures, or people living in the Horn of Africa facing another famine.

The greatest good to which our greatest technology can be put is carbon reduction.

See Through News has its own proposal for one possible approach. The Magic See-Through Mirror is already technically feasible, and becoming more feasible with each breakthrough, but who would be motivated to invest in such a scheme? 

It’s hard to monetise open source solutions. The big money is in constructing data citadels, and not letting any of your precious data leak out.

How smart are we really? If AI represents the peak of our human powers, how come all this human ingenuity, technology and processing power is being devoted to putting more carbon in the atmosphere when we know we need to reduce it?

For all their sophistication, the Silicon Valley giants that dominate our lives and stock markets make money by facilitating other companies’ efforts to persuade more of us to buy more stuff we don’t really need, which inevitably involves more fossil fuels being burnt.

All that human potential, all those brightest minds, all that astonishing hardware and inspired software, pushing us faster towards the cliff edge, when it could be pulling us back.

We’re just a bunch of monkeys. Clever, as primates go, but still animals depending on our habitat.

Technology’s not really the problem, is it? It’s just a tool, and it’s up to us how we use it. 

Do we want to use a hammer to tap in the final hand-hewn peg in a no-nail, sustainable eco-lodge? 

Or shall we use it to smash in a stranger’s skull?

AI is just technology – just a hammer – and we’re currently using it for self-harm.

Where does this leave that early AI metaphor of monkeys climbing trees?

Before we started realising just how catastrophic our fossil fuel addiction was, when carbon was still viewed as an infinite resource, and our oceans and skies as an infinitely capacious trash bin, AI pioneers’ eyes were fixed on the heavens.

Of course, not all of AI is being wasted on Silicon Valley’s advertising corporations. Finding new cancer cures via protein structures is an admirable use of this technology, so long as we have factories to make them, hospitals to stay in, doctors and nurses to administer them, and homes to which to return, cured.

And what about translating Shakespeare?

Literary Translation – the Final Frontier

Finally, let’s return to our AI case study. 

Literary translation is Machine Translation’s Holy Grail, its Turing Test. 

Rendering literary genius from one language into another is one of the few remaining islets of human pre-eminence over machines, as our exceptionalism is swamped by AI.

How are the robots faring at translating Shakespeare?

Computer scientists at UMass Amherst recently presented a panel of expert literary translators with various translations of literary texts. Some were done by humans, some by robots. Like a blind wine tasting, the experts were then asked to rate them, without being told which was which.

Their paper explains the methodology in detail, and the nuances of the PAR3 dataset they used. Here’s what the authors say they found:

“Using PAR3, we discover that expert literary translators prefer reference human translations over machine-translated paragraphs at a rate of 84%, while state-of-the-art automatic Machine Translation metrics do not correlate with those preferences. The experts note that MT outputs contain not only mistranslations, but also discourse-disrupting errors and stylistic inconsistencies.”

A tabloid headline translator (human) would render this careful academic prose as:

The Robots are Coming! Robot Shakespeare Translators Fool Nearly One Fifth of Top Experts!

What the researchers were trying to understand was why robot readers – the automatic MT metrics – were more impressed by robot translations than the human experts were.

By making a few tweaks using GPT-3, they reduced the preference rate from 84% to 69%. 

Why such a sudden leap? GPT-3 is part of an emerging new breed of AI that uses the latest twist in the AI tale: Self-Teaching Systems.

The blind-tasting texts set by those researchers did not include anything from one of the most profound works on the human condition.

Pity, as it provides a perfect comparator. 

The author, Nobel Prize winner Samuel Beckett, wrote En Attendant Godot first in French, publishing his own English translation, Waiting for Godot, two years later.

The play’s French and English language premieres straddled 1954, the year AI pioneer Alan Turing committed suicide.

Waiting for Godot is a play in which, famously, nothing happens, twice. Two tramps pass time beneath a spindly tree waiting for Godot, who never shows up. 

Vladimir, who wants to understand the world, questions everything, and has a fundamentally sunny view of the human condition, spends most of the play trying to get his morose fellow tramp Estragon to join him in his philosophical musings.

In Act 1 the tree is bare (by Act 2 it bears a few leaves). On page 15, Vladimir challenges Estragon to answer a complicated question about human credulity, based on the Bible story of Christ on the Cross. The four Gospellers give different accounts of whether Christ was crucified alone or alongside two thieves, and whether one of the thieves was saved.

Why, Vladimir ponders aloud, badgering Estragon into answering, do we choose to remember the only version that offers hope of redemption?  

Eventually Estragon replies.

“People are bloody ignorant apes”.