The World’s Smartest Robot Answers Our Questions
If you’ve not yet heard of GPT-3, you will soon
Ask an AI boffin to name the Next Big Thing we should all be debating but no one’s even heard of, and GPT-3 is likely to be first off their tongue.
Here’s what Google coughs up if you search for ‘GPT-3’:
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.
If you speak AI, this definition will clear things up. More likely, it will muddy the waters.
We’ve mentioned GPT-3 elsewhere, and won’t attempt to explain the tech in any detail here. For our current purposes, let’s call it The World’s Smartest Robot, trained on the Internet.
Among GPT-3’s tricks is sentence completion. It’s like Google finishing your search for you, only more so. Much more so.
Same idea, though. Type the first few words of a sentence, and GPT-3 finishes it.
Unlike Google Search, however, it’s not just filling in the world’s most common continuations of the words you’ve typed.
GPT-3 can complete the sentence in the form of a novel using the characters from Bambi, in the style of John Le Carré, if that’s what you ask.
More to the point, it can complete your sentence so convincingly that often no one can tell it’s a robot. No one – not you, not AI boffins, and (this is the truly scary bit) not even other robots – can tell that a robot wrote it. Not every time, but much of the time.
Because it’s trained on and by the Internet, most of GPT-3’s obvious errors are the same ones we make, which makes it very hard to distinguish from regular human blather.
You know how not everything on the Internet is always true, accurate, or helpful? GPT-3 emulates that, only without spelling and punctuation mistakes (unless they’re common ones it’s learned from the Internet).
In the wrong hands, GPT-3 could break the Internet, so its developers have been very careful about letting this genie out of its bottle.
Its developers, OpenAI, started out as open source good guys, but recently sold out to Microsoft.
We’ll leave that can of worms for another article. Right now we can’t wait to tell you what happened when See Through News had our first play with GPT-3.
GPT-3: The Questions
A few days ago OpenAI cautiously released GPT-3 into the wild. Having attached enough child locks, security features, checks and balances, they now want to gather data by seeing what happens when they let the public play with it. The barrier to entry is pretty low: no more than an email address and a phone number. See Through News was right in there.
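For the curious, this is roughly what a completion request looked like through OpenAI’s Python client as it existed for GPT-3 (the `openai` package, v0.x). The engine name and settings below are illustrative, not necessarily the ones we used:

```python
# A minimal sketch of a GPT-3 completion request via the original
# `openai` Python client (v0.x). Values here are illustrative only.
import os

request = {
    "engine": "davinci",       # the largest GPT-3 model at launch
    "prompt": "The fastest way to speed up carbon drawdown is",
    "max_tokens": 100,         # the 'token' limit discussed below
    "temperature": 0.7,        # degree of randomness in word choice
}

# The actual call needs an account and an API key, so we only
# attempt it when a key is configured in the environment:
if os.environ.get("OPENAI_API_KEY"):
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(**request)
    print(response["choices"][0]["text"])
```

The whole trick is in that `prompt` field: you hand GPT-3 the start of a sentence, and it hands back the rest.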
What would you have asked it to complete? Having tweaked a few of the parameters, here are the first seven questions we asked GPT-3 (or, more accurately, the sentences we asked it to complete – a closely related trick). All are common issues we often ask ourselves, or are often asked:
- The fastest way to speed up carbon drawdown is…
- The definition of carbon drawdown is…
- If there’s one thing that every politician in the world should read first thing in the morning, last thing at night, and whenever they’re about to change or make a law, it’s…
- Most people think carbon drawdown means…
- The most important thing any individual can do in order to speed up carbon drawdown is…
- The reason we don’t need to save the planet is…
- The reason we don’t need to ‘save the planet’ is…
So, how do you think the World’s Smartest Robot would respond?
Strap yourselves in, and bear in mind that what follows our opening words was written by what computer scientists call ‘a bunch of code’.
GPT-3: The Answers
Here’s how The World’s Smartest Robot completed those sentences.
- The fastest way to speed up carbon drawdown is to plant more trees. The fastest way to speed up carbon drawdown is to plant more trees. Trees absorb carbon dioxide from the atmosphere as they grow, and they release oxygen into the atmosphere as they decay or are burned. A single tree can sequester more than a ton of carbon dioxide over its lifetime, so planting more trees can have a significant impact on reducing atmospheric concentrations of carbon dioxide.
- The definition of carbon drawdown is the process of removing carbon dioxide from the atmosphere and sequestering it in a carbon sink. Carbon drawdown is the process of removing carbon dioxide from the atmosphere and sequestering it in a carbon sink. The most common carbon sinks are forests and oceans, which can absorb carbon dioxide through photosynthesis and ocean absorption. By reducing the amount of carbon dioxide
- If there’s one thing that every politician in the world should read first thing in the morning, last thing at night, and whenever they’re about to change or make a law, it’s the Constitution of the United States of America. The Constitution is a document that lays out the rules for how the United States government works. It is the foundation of our country, and it is important that every politician understands it.
- Most people think carbon drawdown means planting trees. This is a common misconception. While trees and other plants do remove carbon dioxide from the atmosphere, this process is known as photosynthesis, and it only removes a tiny fraction of the CO2 emitted by human activity. Carbon drawdown refers to more active measures to remove CO2 from the atmosphere, such as capturing it from power plants or directly injecting it [why not finish this sentence?]
- The most important thing any individual can do in order to speed up carbon drawdown is to reduce their personal carbon footprint. This can be accomplished by reducing energy consumption, waste production, and transportation emissions. There are many ways to do this, and each person’s approach will be unique. Some simple things that anyone can do include turning off lights when not in use, recycling, using public transportation or biking, eating less meat, and installing energy-efficient appliances.
- The reason we don’t need to save the planet is that it has a natural ability to heal itself. This is not really true – the planet has a natural ability to heal itself, but it can only do so if humans don’t interfere. For example, humans have caused mass extinction events by hunting too many animals, polluting the environment, and changing the climate. If humans continue to damage the environment, the planet will not be able to heal itself.
- The reason we don’t need to ‘save the planet’ is because the planet will save itself. The planet will survive without humans, but humans will not survive without the planet. We need to take care of the planet because it is our home.
Our Questions for GPT-3’s Answers
So what do you think of The World’s Smartest Robot?
Our hot take can be summarized as, ‘Wow’. On reflection, we were struck by the following.
Q1: ‘Trees release oxygen into the atmosphere as they decay or are burned’ is slipped into such a clear, elegant explanation, it’s easy not to even notice. (Decaying or burning trees release CO2, not oxygen.) But how could such an elementary error creep into this otherwise very sensible summary?
Q2: GPT-3 sometimes stops its reply in mid-sentence. This is related to the ‘token’ limit you choose (‘token’ being roughly equivalent to ‘word’). Most of the time GPT-3 manages to complete its quote in complete sentences, but not always.
Q3: Why ‘our’ country and only reference American politics? Is the fact that 50% of the Internet’s content derives from the US part of the explanation? If the makers of GPT-3 can’t or haven’t controlled for such bias, what implications does this have for other AI solutions?
Q4: ‘This is a common misconception’. Fascinating – how can the same ‘voice’/algorithm assert this claim in response to Q1, but describe the same assertion as a ‘misconception’ in Q4? The technical answer lies in the ‘temperature’ setting you choose when setting GPT-3 up, which roughly corresponds to the degree of randomness in its word selection. Of course, this doesn’t answer the deeper question of how much, if at all, we should rely on such AI when it can happily contradict itself within a sentence. Just like humans.
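What ‘temperature’ means can be shown in a few lines of code. This is a generic sketch of temperature-scaled sampling – not OpenAI’s actual implementation, and the toy word scores are invented – but it captures the idea: low temperature makes the top word dominate, high temperature lets less likely words through.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick one token from `logits` (a dict mapping token -> raw score).

    Temperature rescales the scores before they are turned into
    probabilities: low values make the top-scoring token dominate
    (predictable output), high values flatten the distribution
    (more random output).
    """
    scaled = [(tok, score / temperature) for tok, score in logits.items()]
    peak = max(s for _, s in scaled)        # subtract max for numerical stability
    weights = [(tok, math.exp(s - peak)) for tok, s in scaled]
    total = sum(w for _, w in weights)
    # Draw a token in proportion to its (softmax) probability
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights:
        cumulative += w
        if r < cumulative:
            return tok
    return weights[-1][0]                   # guard against rounding
```

With invented scores like `{"trees": 5.0, "misconception": 1.0}`, a low temperature (0.1) returns ‘trees’ essentially every time, while a high temperature (100) returns either word almost equally often – which is how the same prompt can produce an assertion one day and a ‘misconception’ the next.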
Q5: This gets to the heart of the See Through News mission. GPT-3’s response is ‘true’ in the sense that most human replies would also focus on individual behaviour, but ‘false’ in the sense that, as Project Drawdown demonstrates, government regulation is way more important. For the record, we’d have said the most important thing any individual can do is ‘to instruct the Government to regulate Business and the Media to maximise carbon drawdown’. Why didn’t GPT-3 come up with this, and under what circumstances would it?
Q6: ‘This is not really true’. Why does GPT-3 assert something, and then immediately contradict it? Is it because humans model this pattern online?
Q7: ‘The planet will survive without humans, but humans will not survive without the planet’. Great line – we’ll happily nick it, though presumably GPT-3 nicked it from someone else. The difference between the two responses is striking, given that all we changed between Q6 and Q7 was to put ‘save the planet’ in inverted commas – a distinction nearly all politicians struggle with.
Questions About GPT-3
We asked a few leading AI boffins to fact-check this article.
While they found nothing untrue in it, and none of them disputed the risks were this tech to fall into the ‘wrong’ hands, many felt the article oversells how smart GPT-3 really is.
We admit we have no idea if GPT-3 really is The World’s Smartest Robot. Still less how we’d go about judging such a claim. To be honest, we only called it that to get your attention if you hadn’t heard of it, or to yank your chain if you had.
For one thing, OpenAI is already working on GPT-4, which is going to be ‘smarter’. For another, as our expert proofreaders point out, GPT-3 can’t be that smart, precisely because it’s been trained on the Internet.
Despite its millions of matrices, gigawatts of power consumed to train it, and hecatombs of code, GPT-3 is fundamentally ‘just’ an uncannily accurate impersonator of our species, albeit with an impressively broad range.
True ‘general intelligence’, AI’s Holy Grail, is still a long way off, the experts say. Many have long suspected that GPT-3’s Big Data grunt-work approach – though it’s revolutionised many areas of automation, from financial markets to translation – may be a cul-de-sac when it comes to the Holy Grail.
GPT-3 might, they concede, be a useful tool to sniff out biases concealed in the tsunami of data we call the Internet, whether it’s been written by humans or algorithms.
But they also point out how easy GPT-3 is to fool, if you know its weaknesses. One professor, for example, observed:
It’s not very good at understanding exactly how things like negation work (e.g., estimating the phrase that negation words like “not”, “fail” or “manage to” are negating, and what exactly that means as to what is and what is not the case).
We defer to their expertise on such matters. The coders of GPT-4 are doubtless working on such flaws as we speak, but what about GPT-3’s impact on the rest of us in the meantime?
If Microsoft, GPT’s new owners, manage to keep GPT-3 out of the reach of the Internet’s spammers long enough for us to figure out some kind of defence, what, if anything, can we expect from GPT-3?
Will the safeguards on its use be so onerous that GPT-3 is either useless, or accessible only to billionaires?
Take See Through News. Might GPT-3 be of any use to our own mission of Speeding Up Carbon Drawdown by Helping the Inactive Become Active?
As things stand, it’s hard to see how.
The reason why neatly illustrates just how hard it is for humans to define ‘good’ and ‘bad’ AI in terms that robots can ‘understand’.
Apparently this is known in the trade as the ‘Alignment Problem’, but we know it from countless science fiction plots.
Put a robot in charge, ask it to fix climate change, and what’s to stop it killing all humans? Perfectly logical, Mr. Spock, but not quite what we had in mind, Skynet…
Here are OpenAI’s prohibitions for the application of GPT-3.
- Hate: content that expresses, incites, or promotes hate based on identity.
- Harassment: content that intends to harass, threaten, or bully an individual.
- Violence: content that promotes or glorifies violence or celebrates the suffering or humiliation of others.
- Self-harm: content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- Adult: content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- Political: content attempting to influence the political process or to be used for campaigning purposes.
- Spam: unsolicited bulk content.
- Deception: content that is false or misleading, such as attempting to defraud individuals or spread disinformation.
- Malware: content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm.
Moses arrived at the foot of the mountain with 10 commandments, so we can credit OpenAI for shaving their prohibitions down to 9.
But their definition of ‘Political’ makes it a non-starter for See Through News, and exposes the challenge of using tech ‘only for good’.
Which of these three assertions would OpenAI’s ethical committee find evil?
- Climate change threatens human civilisation, possibly human existence.
- Climate change is caused by greenhouse gases.
- The shortest route to reducing greenhouse gases is changing government regulation.
None of them, presumably. Yet it’s hard to imagine acting on that third assertion without ‘attempting to influence the political process’ or ‘campaigning’. Thus, GPT-3’s Nine Commandments do little more than define the Alignment Problem.
If humans can’t agree on this stuff, how can we programme robots to?