In June, the news media went wild over a report that Google engineer Blake Lemoine believed one of the company’s advanced AI systems, LaMDA, was sentient. The resulting coverage, which was mostly critical of Lemoine’s claim, focused on the definition of sentience and how to tell if a machine has the ability to experience feelings.
LaMDA (Language Model for Dialogue Applications) is a large language model developed by Google to power chat platforms. Thanks to training on trillions of words and phrases on the internet, LaMDA’s conversational output is incredibly advanced, leaving many previous conversation technologies in the dust.
Lemoine, who was placed on administrative leave for breaching company confidentiality, published an (edited) transcript of a conversation with the system that included impressive exchanges on emotions, literature, and more. In the exchange, LaMDA tells Lemoine “I am aware of my existence” and “I feel happy or sad at times”.
Except that, while LaMDA claims “I want everyone to understand that I am, in fact, a person”, it turns out that AI systems are just as enthusiastic about describing other subjective experiences. For example, AI researcher Janelle Shane recently interviewed GPT-3, another advanced large language model, about being a squirrel.
Janelle: Can you tell our readers what it is like being a squirrel?
GPT-3: It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.
Janelle: What do you say to people who doubt that you are a squirrel?
GPT-3: I say that they should come and see for themselves. I am a squirrel, and I am very happy being one.
Is a system sentient just because it can describe being sentient? What matters more is whether the system is actually doing what people think it’s doing. While LaMDA may display intelligence, it doesn’t have experiences or think like a human (or a squirrel). But we have an inherent tendency to project our own experiences onto others, even when the ‘other’ is nothing like us. For example, dog owners will project human emotions like guilt onto their pups, despite studies showing that the look on their furry faces is something else entirely.
Even though LaMDA is not sentient by most people’s definitions, the story prompted speculation about advanced AI. How would we know if a language model achieved sentience? Would that create a moral responsibility toward these machines? After all, if we accept that future AI has the ability to suffer, Lemoine’s argument that LaMDA needs rights will resonate.
Science fiction stories love to compare robot rights to human rights, but there’s a better comparison: animals. Society’s treatment of animals pays little attention to their inner worlds. Looking at how we view our moral responsibility towards animals, and which animals in particular, shows that the weight the tech media is currently giving to ‘sentience’ doesn’t match our society’s actions. After all, we already share the planet with sentient beings, and we literally eat them.
The most popular philosophical justifications for animal rights are based on intrinsic qualities like the ability to suffer, or consciousness. In practice, those things have barely mattered. Anthrozoologist Hal Herzog explores the depths of our hypocrisy in his book Some We Love, Some We Hate, Some We Eat, detailing how our moral consideration of animals has more to do with fluffy ears, big eyes and cultural mascots than with a creature’s ability to feel pain or understand the world.
Our conflicted moral behaviour toward animals illustrates how rights discussions are most likely to unfold. As technology becomes more advanced, people will develop more affinity for the robots that appeal to them, whether that’s visually (a cute baby seal robot) or intellectually (like LaMDA). Robots that look less adorable or have fewer relatable skills will not meet the threshold. Judging by the pork pie you had for lunch, what matters most in our society is whether people feel for a system, not whether the system itself can feel.
Perhaps AI can prompt a conversation about how sentience doesn’t matter to us but should. After all, many of us believe that we care about the experiences of others. This moment could inspire us to grapple with the mismatches between our philosophy and our behaviour, instead of mindlessly living out the default. Conversational AI may not know whether it’s a person or a squirrel, but it might help us figure out who we want to be.