In December, Engineered Arts, a robotics company based in Cornwall, took Twitter by storm with a video of one of its latest creations, a humanoid robot named Ameca. The video was shared by thousands of people, who were disturbed and amazed in equal measure by the robot's human appearance and lifelike expressions.
We spoke to Will Jackson, CEO of Engineered Arts, about how Ameca was made, what the robot will be used for, and whether he finds his own creation unsettling.
A lot of people saw the video of Ameca online, but what is the robot for?
With Ameca, we wanted to create an intuitive and straightforward way to communicate with a machine. It’s essentially a humanoid designed as a platform for AI.
There are a lot of people working on software for human interaction right now: things like facial recognition and expression estimation, and then there are things like gesture recognition, speech recognition and text-to-speech generation. While there are lots of people working on the software, there's very little hardware. If you want people to really interact with an AI, a screen and keyboard isn't going to cut it.
We wanted to build a machine that, if you smile, knows you’re happy and if you frown, knows you disapprove. You don’t even have to speak to communicate; a nod of the head, a wink, or a smile is worth a thousand words. These were the kind of interactions that we wanted to explore with Ameca.
How does Ameca work?
On a physical level, there are loads of motors, but it’s quite a novel design. We use a lot of ball-screw actuators that approximate human muscles reasonably well. You’ll notice in the videos that the movement is very fluid because we’ve spent a lot of time getting that right.
On the software side, we have a complete software stack that covers everything from motor control all the way up to AI functions like face and speech recognition.
That said, Ameca is not a sentient robot. People tend to project their own idea of what a robot is: when they see it behave in a certain way, they make assumptions, because those are human-type behaviours that are usually driven by a human level of sentience. But that's absolutely not true here.
What you’re looking at is some code executing, and some of it can be quite simple. But the illusion can be engaging and quite powerful.
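To give a sense of how simple that code can be, here is a minimal, hypothetical sketch (not Engineered Arts' actual software) of the kind of rule that produces an apparent eye-contact behaviour: an off-the-shelf face detector finds a face in the camera image, and the robot turns its gaze toward it. The face detection uses OpenCV's standard Haar-cascade API; the `robot` object, with its `set_gaze` and `smile` methods, is invented purely for illustration.

```python
# Sketch of a simple "eye contact" rule driven by off-the-shelf face detection.
# Detection uses OpenCV's Haar-cascade API; the actuation calls (set_gaze, smile)
# belong to a hypothetical robot interface and exist only for illustration.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def eye_contact_step(frame, robot):
    """One loop iteration: look at the largest detected face and offer a smile."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        robot.set_gaze(0.0, 0.0)      # nobody in view: look straight ahead
        return
    # Pick the largest face (likely the closest person) and aim the eyes at it.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    cx = (x + w / 2) / frame.shape[1] - 0.5   # horizontal offset, roughly -0.5..0.5
    cy = (y + h / 2) / frame.shape[0] - 0.5   # vertical offset, roughly -0.5..0.5
    robot.set_gaze(cx, cy)                    # hypothetical motor command
    robot.smile(0.6)                          # hypothetical expression command
```

A couple of dozen lines like this, run many times a second, already read as "the robot noticed me"; the illusion does most of the work.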
Do the robots ever feel creepy to you?
Yeah, you get caught off-guard. They do things that you don’t quite expect. They’re programmed to make eye contact, and sometimes when they give you a funny look, you can’t help attributing some sort of thought process to that.
We’ve left a lot of the mechanics exposed with Ameca, and that was an attempt to get away from what’s called the Uncanny Valley. That’s a kind of graph of robot appearance vs acceptability. If you get too close to being human, the graph dips into the valley, which means it’s creepy: you don’t like it because it’s close to resembling a person, but not quite perfect.
Boston Dynamics [a US-based robotics company] has the same thing with its robot dog. It has no head and is obviously mechanical, but it moves just like a dog. There was one video where somebody kicked the robot, and people couldn't help feeling sorry for it. There's something deep down inside us that recognises when something's alive from the way it moves, and it's really hard to override that association of biological motion equalling a living thing.
Both you and Boston Dynamics often go viral with your videos. What do you think fascinates people so much about them?
For Boston Dynamics, their work is about a robot getting from point A to point B. We're more focused on human interaction. However, when their machines start to move like living things, they fall into the Uncanny Valley. I think that's what makes the videos go viral.
There's always that Terminator scenario that people imagine, thinking this will spell the end of the world, but I wouldn't worry too much about that. If an AI wanted to destroy us, it would not send a humanoid robot. It would just detonate some warheads; that would be a lot quicker than chasing us around with guns. That scenario just makes a good movie.
It’s about this vision of self. It’s like looking at yourself in the mirror and seeing this machine and wondering what’s different between the two of you. Seeing something that moves and behaves like a human, I think, scratches at that and that’s what makes these videos go viral.
For the people concerned about robots, what would you say to reassure them?
There are serious concerns around AI, but AI is not robots, so worry about the software and what you put in control of it. I wouldn't ever put a weapon system in the control of an AI; that's a really terrible idea.
Do the military use humanoid robots in active service? Not as far as I know. Do they use drones to drop bombs? Yes, so worry about that. It's not something we're interested in as a company either; we've never actually done defence work.
The analogy I like to make is of C-3PO in the original Star Wars film. It's a friendly robot that's there as a translator, basically, for entertainment. When the two robots, R2-D2 and C-3PO, are captured and being resold, nobody wants to buy C-3PO because it's regarded as a useless robot. Everybody wants R2-D2, the little tin can with the dome head, because he's seen as useful.