COMMENT

DR KATE DARLING:

WHY WE LOVE SOME ROBOTS AND HATE OTHERS

Not all artificial intelligence is equal: just ask Clippy, Microsoft’s much reviled virtual assistant

Back in 2019, MIT graduate student Daniella DiPaola and I began to frequent our local grocery store, though not to shop for food. The store had introduced a robot that we wanted to see in action. The 1.9m-tall machine roamed the aisles, scanning the floor for spills and paging the employees to clean up hazards. But what interested us most was that, despite its large googly eyes and friendly name, Marty the robot was unpopular with customers.

As robots come into shared spaces, people tend to have strong positive or negative reactions, often taking engineers by surprise. But the key to designing automated systems may be simple: recognising that people treat robots as if they’re alive.

Even though robots have been building cars in factories for a while, we’ve seen a more recent wave of deployments in areas where they interact with people. Whether they’re doing the hoovering or delivering food, robots are increasingly entering our workplaces, homes and public spaces.

Part of this is due to advances in machine vision that have made it easier for robots to navigate complex infrastructure and deal with unexpected occurrences, such as MIT researchers dropping groceries in front of them to see what happens. Robot engineers have worked so diligently to make functional pieces of technology that they’re often taken aback by an additional component of robot deployment: people’s reactions.

In some cases, the response is overwhelmingly positive, with people adopting the robots as friends or co-workers, giving them promotions, hugs and silly hats.

Over 80 per cent of Roombas, the robot vacuum made by iRobot, have names. The company was astonished to discover that some customers would send in their vacuum for repair and reject the offer of a brand-new replacement, requesting that ‘Meryl Sweep’ be sent back to them.

Are these people watching too many sci-fi movies?

According to a few decades of research on how people interact with computers and robots, our response to these devices is about more than just pop culture.

People subconsciously treat automated technology like it’s alive, falling into social patterns like politeness, reciprocity and empathy. Stanford professor Clifford Nass, a pioneer in human-computer interaction, demonstrated that people will argue with computers, form bonds with them and even lie to them to protect their feelings.

“The biggest surge in negative mentions came when the store held a birthday party for Marty the robot, complete with cake and balloons”

While this works in some robots’ favour, Marty, the robot DiPaola and I observed, got the opposite treatment.

When DiPaola first noticed people complaining about it on Facebook, we wondered whether the backlash was about robots taking jobs (a legitimate concern voiced by some of the employees). But when we surveyed shoppers, they had different gripes. Many said the robot was creepy, because it watched and followed them, or got in their way. DiPaola did a sentiment analysis on Twitter, measuring positive and negative mentions of the robot. The biggest surge in negative mentions came when the store held a birthday party for Marty, complete with cake and balloons for customers.
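At its simplest, a sentiment analysis like this classifies each tweet as positive or negative and tracks the counts over time. Here's a minimal toy sketch of that idea, using a made-up word lexicon; it's purely illustrative, not DiPaola's actual method, which would have used a far more sophisticated model.

```python
# Toy lexicon-based sentiment scoring: each tweet's score is the number of
# positive-word hits minus the number of negative-word hits. Real analyses
# use trained models, but the underlying counting idea is the same.
POSITIVE = {"love", "cute", "friendly", "helpful", "great"}
NEGATIVE = {"creepy", "hate", "annoying", "useless", "scary"}

def sentiment(text: str) -> int:
    """Return a crude sentiment score for one tweet (positive minus negative hits)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "Marty the robot is so cute and friendly",
    "that creepy robot keeps following me and I hate it",
]
print([sentiment(t) for t in tweets])  # prints [2, -2]
```

Bucketing scores like these by date is what reveals a surge of negative mentions around an event such as the birthday party.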

Free cake is a strange thing for people to get upset about, but it illustrates how attempts to ‘humanise’ a robot can backfire. It was reminiscent of another unsuccessful attempt: Microsoft’s animated Office assistant, Clippy.

Here’s how Nass explained people’s violent dislike for the virtual paperclip: “If you think about people’s interaction with Clippy as a social relationship, how would you assess Clippy’s behaviour? Abysmal, that’s how. He’s utterly clueless and oblivious to the appropriate ways to treat people … If you think of Clippy as a person, of course he would evoke hatred and scorn.”

If we apply this human-computer interaction principle to robots, people love some and hate others because of social expectations. This means that, when done incorrectly, lifelike features can have a Clippy effect, generating more hostility than a different tool performing the same task would. Conversely, robots that harness our social expectations are extremely likeable, which is why some roboticists are teaming up with film animators to design appealing machines.

The blunders happen when robot developers focus so thoroughly on the technology that they forget the human interaction element. Integrating robots into shared spaces requires the understanding that successful engineering is only one piece of the puzzle, and that our social behaviour as humans matters at least as much as the AI.

DR KATE DARLING

(@grok_) Kate is a researcher at MIT, where she investigates technology and society, and studies human-robot interaction.