By Dr Kate Darling

Published: Friday, 24 June 2022 at 12:00 am


In 2019, MIT graduate student Daniella DiPaola and I began to frequent our local grocery store, but not to shop for food. The store had introduced a robot named Marty in some of its branches, and we wanted to see it in action.

The 1.9m-tall, grey machine roamed the aisles on its wide base, scanning the floor for spills and paging employees to clean up fall hazards. But what interested us most was that the robot, despite its large googly eyes and friendly name, was notably unpopular with customers.

As robots come into shared spaces, people tend to have strong positive or negative reactions, often taking engineers by surprise. But the key to designing automated systems may be simple: recognising that people treat robots like they’re alive.

Even though robots have been building cars in factories for decades, we’ve seen a more recent wave of deployments in areas where they interact with people. From robot vacuums to food delivery bots to autonomous security patrols, robots are increasingly entering our workplaces, homes, and public spaces.

Part of this is due to advances in machine vision, which have made it easier (albeit still challenging) for robots to navigate complex environments and handle unexpected occurrences, whether that’s snowflakes, a stray dog, or MIT researchers dropping grocery items in front of a store robot to see what happens.

Robot engineers and designers have worked so diligently to make functional pieces of technology that they are often taken aback by an additional component of robot deployment: people’s reactions.

In some cases, the response is overwhelmingly positive, with people adopting the robots as friends or coworkers, giving them hugs, silly hats, and promotions. Over 80 per cent of Roombas, the home robot vacuum made by iRobot, have names. The company was astonished to discover that some customers would send in their vacuum for repair and turn down the offer of a brand-new replacement, requesting that ‘Meryl Sweep’ be sent back to them.

Are the people who socialise with robots watching too many science fiction movies? According to a few decades of research on how people interact with computers and robots, our response to these devices is about more than just pop culture.

People subconsciously treat automated technology like it’s alive, falling into social patterns like politeness, reciprocity, and empathy. Stanford professor Clifford Nass, a pioneer in human-computer interaction, demonstrated that people will argue with computers, form bonds with them, and even lie to them to protect their ‘feelings’.

""
Marty the supermarket robot © Getty Images

While this works in some robots’ favour, the store robot DiPaola and I observed got the opposite treatment. When DiPaola first noticed people complaining about it on Facebook, we wondered whether the backlash was about robots taking jobs (a legitimate concern voiced by some of the employees).

But when we surveyed shoppers, they had different gripes. Many said the robot was “creepy”: it watched them, followed them, or got in their way. DiPaola ran a sentiment analysis on Twitter, measuring positive and negative mentions of the robot. The biggest surge in negative mentions happened when the grocery store held a birthday party for the robot, complete with cake and balloons for customers.
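For readers curious what a sentiment analysis like this involves: the article doesn’t describe DiPaola’s exact method, but a common lexicon-based approach can be sketched in a few lines of Python using NLTK’s VADER analyser. The tweets below are invented for the example.

```python
# A minimal, illustrative sketch of lexicon-based sentiment analysis using
# NLTK's VADER analyser. The tweets are invented, and this is not
# necessarily the method used in the study described above.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER word lexicon

analyser = SentimentIntensityAnalyzer()

# Hypothetical tweets mentioning the robot
tweets = [
    "Marty the robot is adorable, happy birthday little guy!",
    "That giant grey robot keeps following me around the store. Creepy.",
    "Why is the grocery store throwing a birthday party for a robot?",
]

for tweet in tweets:
    scores = analyser.polarity_scores(tweet)
    # VADER's 'compound' score runs from -1 (most negative) to +1 (most
    # positive); +/-0.05 are the conventional cut-offs for labelling text.
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:>8}  {scores['compound']:+.2f}  {tweet}")
```

Aggregating labels like these over time is what reveals surges in negativity, such as the one around the robot’s birthday party.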

Free cake is a strange thing for people to get upset about, but it illustrates that, sometimes, ‘humanising’ a robot can backfire. In fact, adorning this tall grey robot with eyes and a friendly name was reminiscent of another unpopular character: Microsoft’s animated Office assistant, Clippy.

Here’s how Clifford Nass explained people’s violent dislike for the virtual paperclip: “If you think about people’s interaction with Clippy as a social relationship, how would you assess Clippy’s behaviour? Abysmal, that’s how. He is utterly clueless and oblivious to the appropriate ways to treat people… If you think of Clippy as a person, of course he would evoke hatred and scorn.”

If we apply this human-computer interaction principle to robots, people love some and hate others because of social expectations. This means that, when done incorrectly, lifelike features can have a Clippy effect, generating more animosity than a different tool performing the same task would.

At the same time, robots that harness our social expectations are extremely likeable, which is why some roboticists are starting to team up with film animators to design appealing machines more intentionally.

The blunders happen when robot developers focus so intently on the technology itself that they neglect the human side of the interaction. Integrating robots into shared spaces requires understanding that successful engineering is only one piece of the puzzle, and that our social behaviour as humans matters at least as much as the AI.
