COMMENT
DR KATE DARLING:
CAN ARTIFICIAL AGENTS BE TRUSTED?
AI systems can now generate and use language that feels like a real conversation. But to whose benefit?
Imagine being able to chat with an artificial intelligence (AI) about anything. What if you could discuss last night’s game, get relationship advice, or have a deep philosophical debate with the virtual assistant on your kitchen counter? Large language models – AIs capable of this level of communication – being developed by companies such as OpenAI, Google and Meta are advancing quickly enough to make this our new reality. But can we trust what artificial agents have to say?
There are many reasons to embrace conversational AI. The potential uses in health, education, research and commercial spaces are mind-boggling. Virtual characters can’t replace human contact, but mental health research suggests that having something to talk to, whether real or not, can help people. Besides, it’s fun to banter with an artificial agent that can access all of the knowledge on the internet.
But conversational agents also raise ethical issues. Transparency is one: people may want to know whether they’re talking to a human or a machine. This makes sense, but it’s probably also fairly easy to address in contexts where it matters. The bigger problem with these systems is that we trust them more than we should.
As conversational agents become more compelling, will we rely on them for information? We may even grow fond of the artificial characters in our daily lives. A large body of research in human-computer and human-robot interaction shows that people reciprocate social gestures, disclose personal information, and are even willing to shift their beliefs or behaviour for an artificial agent. This kind of social trust makes us vulnerable to emotional persuasion and requires careful design, as well as rules to ensure that these systems aren’t used against people.
A chatbot may, for instance, blurt out racist terms, provide false health information, or instruct your child to touch a live electrical plug with a coin – all of which have happened. The newest language models are trained on vast amounts of text found on the internet, including toxic and misleading content. These models also use neural networks, which means that instead of following rules (‘if the input is x, respond y’), they create an output by learning from a mishmash of examples in a way that is harder to understand or control.
As it becomes more difficult to anticipate a language model’s responses, there’s more risk of unintended consequences. For example, if people ask their home assistant how to deal with a medical emergency, invest their money, or whether it’s okay to be gay, the wrong kinds of answers can be harmful.
The information that people may share with their devices is also troubling. When computer scientist Joseph Weizenbaum created a simple chatbot called ELIZA in the 1960s, he was surprised to find that people would give it personal information that they wouldn’t disclose to him, even though he had access to the chat data. Similarly, people may tell artificial agents their secrets, without thinking about whether and how companies may collect and mine that information.
We need to ask who creates and owns large language models, who deploys the devices that use them, and whom this harms or benefits. Personal information that people reveal in conversation can be used for and against them. The idea of having an artificial agent that remembers things about you is appealing, but it could also allow others to manipulate you and your loved ones. One day, your home assistant could alert you to a deal on a car you can’t really afford, at a time when it knows you’re emotionally most likely to buy it. Imagine if it gave you selective political information, or pushed a software upgrade it knows you’re willing to spend money on.
If that sounds dystopian, consider that we’re almost there. Last week, the Amazon Echo in my kitchen tried to sell my kids an ‘extreme fart package’. Advertisers invest massive amounts every year to reach children and teenagers, and recent research I collaborated on with MIT PhD students Anastasia Ostrowski and Daniella DiPaola indicates that kids are confused about the role of companies when an artificial agent advertises products to them – a very near-future consumer protection issue.
It’s wild to think that my kids will not remember a time before we could talk to machines. As we enter this reality, it’s both exciting and concerning to imagine the different paths this era could take. As people begin to forge relationships with, and perhaps even demand rights for, artificial agents, we need to ensure that this technology is designed and used responsibly. After all, humanity deserves protection, too.
DR KATE DARLING
(@grok_) Kate is a research scientist at the MIT Media Lab, studying human-robot interaction. Her book is The New Breed (£20, Penguin).