Artificial intelligence has been in the headlines in recent months because of the threat it may pose to humanity. Yet, as Michael Wooldridge reveals, the idea of AI and the fears surrounding it have been around for centuries, long before The Terminator.

By Professor Michael Wooldridge

Published: Thursday, 13 July 2023 at 12:00 am


Over the past decade, artificial intelligence (AI) has become the most hyped technology since the World Wide Web. Its development has been driven by enormous investment from big tech companies desperate to steal a march on their competitors – and the future of trillion-dollar businesses hinges on whether it succeeds or fails. Progress has been so rapid that some informed observers even claim that machines with the full range of human capabilities (‘general AI’) may soon be here.

Amid all the excitement, it is easy to forget that AI is not a new field. Indeed, the idea of AI is an ancient one: it appears throughout recorded history in one form or another. Ancient Greek legends feature Hephaestus, divine blacksmith to the gods, who created metal automata to serve Olympus. Jewish folklore has the myth of the golem, a creature fashioned from mud and given life through arcane rituals.

Mary Shelley’s 1818 novel Frankenstein – the urtext for science fiction – is all about creating artificial life. And Fritz Lang’s seminal 1927 film Metropolis established an astonishing number of fantasy horror tropes with its Maschinenmensch – the ‘machine human’ robot that wreaks murderous chaos.

The Turing machine and the Turing test

Actually creating AI, however, remained firmly in the realm of science fiction until the advent of the first digital computers soon after the end of the Second World War. Central to this story is Alan Turing, the brilliant British mathematician best known for his work cracking Nazi ciphers at Bletchley Park. Though his code-breaking work was vital for the Allied war effort, Turing deserves to be at least as well known for his work on the development of computers and AI.

While studying for his PhD in the 1930s, he produced a design for a mathematical device now known as a ‘Turing machine’, providing a blueprint for computers that is still standard today. In 1948, Turing took a job at Manchester University to work on Britain’s first computer, the so-called ‘Manchester baby’. The advent of computers sparked a wave of curiosity about these ‘electronic brains’, which seemed to be capable of dazzling intellectual feats.

Alan Turing deserves to be at least as well known for his work on the development of computers and AI

Turing apparently became frustrated by dogmatic arguments that intelligent machines were impossible and, in a 1950 article in the journal Mind, sought to settle the debate. He proposed a method – which he called the Imitation Game but which is now known as the Turing test – for detecting a machine’s ability to display intelligence. A human interrogator engages in conversations with another person and a machine – but the dialogue is conducted via teleprinter, so the interrogator doesn’t know which is which. Turing argued that if a machine couldn’t be reliably distinguished from a person through such a test, that machine should be considered intelligent.
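
As a rough sketch of the protocol (Turing’s paper describes only the setup, so both respondent functions below are hypothetical placeholders), the imitation game amounts to relaying the interrogator’s questions over a blind text channel to a hidden person and a hidden machine:

```python
import random

# A minimal sketch of the imitation game: the interrogator's questions are
# relayed over a text-only channel to two hidden respondents, one human and
# one machine. Both respondent functions are hypothetical placeholders.

def machine_respondent(question: str) -> str:
    return "That is an interesting question."  # stand-in for a conversational program

def human_respondent(question: str) -> str:
    return input(f"(human, please reply) {question}\n> ")

def imitation_game(questions):
    labels = ["A", "B"]
    random.shuffle(labels)  # hide which label belongs to the machine
    respondents = {labels[0]: machine_respondent, labels[1]: human_respondent}
    for question in questions:
        for label in sorted(respondents):
            print(f"[{label}] {respondents[label](question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    return respondents[guess] is machine_respondent  # was the machine unmasked?

# One of Turing's own example questions from the 1950 paper.
imitation_game(["Can you write me a sonnet on the subject of the Forth Bridge?"])
```

If, over many such exchanges, the interrogator’s guesses are no better than chance, Turing’s argument is that the machine deserves to be called intelligent.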

"A
A poster for the 1927 German science-fiction film Metropolis, which featured a Maschinenmensch (machine-human, or robot) – fuelling fears of AI (Photo by LMPC via Getty Images)

At the same time, on the other side of the Atlantic, US academic John McCarthy had become interested in the possibility of intelligent machines. In 1955, while applying for funding for a scientific conference the following year, he coined the term ‘artificial intelligence’.

McCarthy had grand expectations for his event: he thought that, once researchers with the relevant interests had been brought together, AI would be developed within just a few weeks. In the event, they made little progress at the conference – but McCarthy’s delegates gave birth to a new field, and an unbroken thread connects those scientists, through their academic descendants, down to today’s AI.

Development of ‘neural network’ technologies

At the end of the 1950s, only a handful of digital computers existed worldwide. Even so, McCarthy and his colleagues had by then constructed computer programs that could learn, solve problems, complete logic puzzles and play games. They assumed that progress would continue to be swift, particularly because computers were rapidly becoming faster and cheaper.

But momentum waned and, by the 1970s, research funding agencies had become frustrated by over-optimistic predictions of progress. Cuts followed, and AI acquired a poor reputation. A new wave of ideas prompted a decade of excitement in the 1980s but, once again, progress stalled – and, once again, AI researchers were accused of overinflating expectations of breakthroughs.

Things really began to change this century with the development of a new class of ‘deep learning’ AI systems based on ‘neural network’ technology – itself a very old idea. Animal brains and nervous systems comprise huge numbers of cells called neurons, connected to one another in vast networks: the human brain, for example, contains tens of billions of neurons, each of which has, on average, of the order of 7,000 connections. Each neuron recognises simple patterns in data received by its network connections, prompting it to communicate with its neighbours via electro-chemical signals.

The human brain contains tens of billions of neurons, each of which has, on average, of the order of 7,000 connections

Human intelligence somehow arises from these interactions. In the 1940s, US researchers Warren McCulloch and Walter Pitts were struck by the idea that electrical circuits might simulate such systems – and the field of neural networks was born. Although they’ve been studied continuously since McCulloch and Pitts’ proposal, it took further scientific advances to make neural networks a practical reality.
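
The unit McCulloch and Pitts proposed can be sketched in a few lines: an artificial ‘neuron’ adds up the weighted signals arriving on its connections and fires when the total reaches a threshold. The weights and threshold below are purely illustrative:

```python
def artificial_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: fire (output 1) if the weighted sum
    of incoming signals reaches the threshold, otherwise stay silent (0)."""
    activation = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# This unit recognises one simple pattern: 'both of my inputs are active'.
print(artificial_neuron([1, 1], [0.6, 0.6], threshold=1.0))  # -> 1 (fires)
print(artificial_neuron([1, 0], [0.6, 0.6], threshold=1.0))  # -> 0 (silent)
```

Wire enough of these units into a network and, in principle, far richer patterns than this single unit’s can be recognised.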

Notably, scientists had to work out how to ‘train’ or configure networks. The required breakthroughs were delivered by British-born researcher Geoffrey Hinton and colleagues in the 1980s. This work prompted a short-lived flurry of interest in the field, but it died down when it became clear that computer technology of the time was not powerful enough to build useful neural networks.
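
That breakthrough is usually identified with the backpropagation algorithm; the toy sketch below captures only its core idea – nudge each weight in the direction that reduces the network’s error on training examples – and every number in it is illustrative rather than drawn from any real system:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=2000, learning_rate=0.5):
    """Fit a single sigmoid unit by gradient descent on squared error:
    a toy stand-in for training a full multi-layer network."""
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, target in examples:
            output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = output - target
            grad = error * output * (1 - output)  # how the error changes with the unit's input sum
            weights = [w - learning_rate * grad * x for w, x in zip(weights, inputs)]
            bias -= learning_rate * grad
    return weights, bias

# Learn the 'both inputs active' pattern from examples, rather than setting weights by hand.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train(data))
```

Backpropagation proper pushes the same kind of error signal back through many stacked layers of units, which is exactly what the hardware of the 1980s struggled to do at a useful scale.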

‘Deep learning’ AI systems

Come the new century, that situation changed: today we live in an age of abundant, cheap computer power and data – both of which are essential for building the deep-learning networks that underpin recent advances in AI.

"Professor
Professor Geoffrey Hinton in 2017. His groundbreaking work on neural networks supercharged the development of AI, but he recently expressed concern that it might threaten the survival of our species (Photo by Julian Simmonds/Shutterstock)

Neural networks represent the core technology underpinning ChatGPT, the AI program released by OpenAI in November 2022. ChatGPT – whose neural networks each comprise around a trillion components – immediately went viral, and is now used by hundreds of millions of people every day. Some of its success can be attributed to the fact that it feels exactly like the kind of AI we have seen in the movies. Using ChatGPT involves simply having a conversation with something that seems both knowledgeable and smart.

What its neural networks are doing, however, is quite basic. When you type something, ChatGPT simply tries to predict what text should appear next. To do this, it has been trained using vast amounts of data (including all of the text published on the World Wide Web). Somehow, those huge neural networks and data enable it to provide extraordinarily impressive responses – for all intents and purposes, passing Turing’s test.
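
To make ‘predict what text should appear next’ concrete, here is a deliberately tiny stand-in: a model that just counts which word tends to follow which in some sample text, then keeps emitting the likeliest continuation. ChatGPT’s networks compute far richer probabilities over text, but the task is the same; everything in this sketch, including the sample sentence, is illustrative:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def continue_text(prompt, follows, length=10):
    """Repeatedly append the most likely next word after the last word so far."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

model = train_bigrams("the cat sat on the mat and the cat slept on the sofa")
print(continue_text("the cat", model, length=6))
```

Systems like ChatGPT sample from the predicted probabilities rather than always taking the single likeliest word, but the loop is the same: look at the text so far, predict a continuation, repeat.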

The nightmare of losing control

The success of ChatGPT has brought to the fore a primal fear: that we might bring something to life – and then lose control. This is the nightmare of Frankenstein, Metropolis and The Terminator. With the unnerving ability of ChatGPT, you might believe that such scenarios could be close at hand. However, though ChatGPT is remarkable, we shouldn’t credit it with too much real intelligence. It is not actually a mind – it only tries to suggest text that might appear next.

The success of ChatGPT has brought to the fore a primal fear: that we might bring something to life – and then lose control

It isn’t wondering why you are asking it about curry recipes or the performance of Liverpool Football Club – in fact, it isn’t wondering anything. It doesn’t have any beliefs or desires, nor any purpose other than to predict words. ChatGPT is not going to crawl out of the computer and take over.

That doesn’t mean, of course, that there are no potential dangers in AI. One of the most immediate is that ChatGPT or its like may be used to generate disinformation on an industrial scale to influence forthcoming US and UK elections. We also don’t know the extent to which such systems acquire the countless human biases we all display, and which are likely evident in their training data. The program, after all, is doing its best to predict what we would write – so the large-scale adoption of this technology may essentially serve to hold up a mirror to our prejudices. We may not like what we see.

Michael Wooldridge is professor of computer science at the University of Oxford, and author of The Road to Conscious Machines: The Story of AI (Pelican, 2020)

This article was first published in the August 2023 issue of BBC History Magazine