By Dr Julia Shaw

Published: Thursday, 04 August 2022 at 12:00 am


Ask a child a question, and you will quickly learn that many of their answers are quite scattered. You ask how their day was, and they respond with a story about how much they like turtles. Try again, perhaps with a more specific question this time, and you might get a curt “yes”. Children respond to questions in such literal and lateral ways that there is a whole section of parenting literature dedicated to teaching children how to answer questions. But what about asking the right questions?

When you are concerned that something bad has happened, those hard-to-get answers become all the more frustrating. They can become a problem for safeguarding children or investigating a crime. Children aren’t tiny adults, and this means that teachers, social workers, judges, police, paediatricians and psychologists can all benefit from training on how to interview children.

The issue is that training people to question vulnerable children is no easy task. Interviewing skills are all about learning the evidence-based guidelines, then getting the opportunity to practise and receive feedback. It’s unethical to train on actual children, so courses improvise. Some training courses hire actors to play children. But others are turning to a new approach.

In place of a well-meaning adult pretending to be a child, a simulation could be the next best thing. Doctors use simulations to train for surgery; F1 drivers use them whenever they’re away from a track. Now, researchers have created digital children: child lookalikes programmed to have ‘memories’ and to answer questions the way real children do. Your task is to figure out whether something bad happened to the virtual children.

What exactly these virtual children look like depends on the team. Some are realistic, made to look, move and sound as close to real children as possible by using realistic game characters or deepfakes. But this has led to problems with the ‘uncanny valley’: the unsettling feeling we get when something looks almost, but not quite, real. As a result, some researchers have created virtual children that look and sound like game characters instead.

The first vulnerable child avatars were created by a team of Finnish and Italian researchers. In a study the team published in 2014, participants interviewed the digital children, with the goal of finding out whether each child had a ‘memory’ of something bad. As in real life, only some of the avatar children were programmed to have ‘experienced’ something bad.

The researchers accounted for how suggestible children are. For example, if someone asked one of the avatar children the same suggestive question three times, the third time the child would change their answer from ‘no’ to ‘yes’. This mimics how real children can end up saying something untrue because of the way they have been questioned.
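
The avatars’ full rule sets are not published in this article, but the three-repetitions behaviour is simple enough to picture in code. Below is a minimal, hypothetical Python sketch of such a suggestibility rule; the class, the method names and everything beyond the ‘third time flips to yes’ rule are illustrative assumptions, not the researchers’ implementation.

```python
# Hypothetical sketch of a suggestibility rule for a child avatar.
# Only the "third repetition flips 'no' to 'yes'" behaviour comes from
# the article; all names and other details are invented.

from collections import Counter


class ChildAvatar:
    def __init__(self, experienced_event: bool, flip_threshold: int = 3):
        self.experienced_event = experienced_event
        self.flip_threshold = flip_threshold
        self.suggestive_counts = Counter()  # how often each suggestive question was asked

    def answer(self, question: str, suggestive: bool) -> str:
        truthful = "yes" if self.experienced_event else "no"
        if not suggestive:
            return truthful
        self.suggestive_counts[question] += 1
        # Under repeated suggestive questioning, the avatar caves and
        # switches from its truthful 'no' to a compliant 'yes'.
        if (not self.experienced_event
                and self.suggestive_counts[question] >= self.flip_threshold):
            return "yes"
        return truthful


avatar = ChildAvatar(experienced_event=False)
question = "He hurt you, didn't he?"
print([avatar.answer(question, suggestive=True) for _ in range(3)])
# -> ['no', 'no', 'yes']
```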

People who were given feedback after each interview quickly got better at asking open questions. They also came to more accurate conclusions about whether there were safeguarding concerns. These promising results prompted other teams to try to make their own virtual children.

One German team created avatars in virtual reality classrooms. In a VR context it feels like you are sitting across from the child, rather than looking at them on a screen, which can make the interaction more realistic. The virtual children’s ‘memories’ are also based on real cases.

Children need to trust you. They also require reassurance, or they might suddenly clam up. This is something the German researchers have built into their simulations. For the VR children to tell you what happened, you need to start with neutral or positive topics. This is a technique that most people who work with children will know as a way to build rapport. You are also more likely to get useful details if you make reassuring comments.
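
The article does not detail the German system’s internals, but the behaviour it describes, rapport unlocking disclosure, can be sketched as a simple state machine. Everything in the Python sketch below (the class name, the scoring and the threshold) is a hypothetical illustration, not the team’s code.

```python
# Hypothetical sketch of rapport-gated disclosure. The idea that neutral
# or positive topics and reassurance must come before disclosure is from
# the article; the scoring and threshold values are invented.

class VRChild:
    def __init__(self, memory_details: list[str]):
        self.memory_details = memory_details
        self.rapport = 0
        self.revealed = 0

    def hear(self, utterance_type: str) -> str:
        # Small talk and reassurance build rapport; pressing straight
        # for the incident does not.
        if utterance_type in ("neutral_topic", "positive_topic"):
            self.rapport += 1
            return "The child chats happily."
        if utterance_type == "reassurance":
            self.rapport += 2
            return "The child relaxes a little."
        if utterance_type == "incident_question":
            if self.rapport < 3:
                return "The child clams up."
            if self.revealed < len(self.memory_details):
                detail = self.memory_details[self.revealed]
                self.revealed += 1
                return f"The child discloses: {detail}"
            return "The child has nothing more to add."
        return "The child looks confused."


child = VRChild(["a man came into the classroom", "he took my bag"])
print(child.hear("incident_question"))  # no rapport yet: the child clams up
print(child.hear("neutral_topic"))
print(child.hear("reassurance"))
print(child.hear("incident_question"))  # now the child discloses a detail
```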

The most recent project comes from an international team of computer scientists and psychologists who want to create a standalone AI child. In June 2022, the team explained that they had synthesised a talking child avatar from 1,000 transcripts of mock interviews with actors trained to behave like children.

This is just for the initial testing, as they plan to train the AI on interviews with actual children soon. It’ll be interesting to see what happens when human coders are taken out of the picture. Will an AI child be able to sufficiently mimic a real one?
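
The article does not describe how the avatar was synthesised from those transcripts. Purely as a toy illustration, here is a naive retrieval baseline in Python that answers by matching the closest transcript question; the transcript pairs, the cutoff and the function name are all invented for the example, and the team’s actual system is a synthesised talking avatar, not a lookup table.

```python
# Toy retrieval baseline: answer like a child by finding the closest
# question in a bank of mock-interview transcripts. The example pairs
# below are invented, and this is not the research team's method.

import difflib

transcripts = {
    "what did you do at school today": "we had drawing and then lunch",
    "did anything scary happen": "i don't want to talk about it",
    "tell me about your teacher": "she is nice, she has a dog",
}

def child_reply(question: str) -> str:
    match = difflib.get_close_matches(
        question.lower().strip("?"), transcripts.keys(), n=1, cutoff=0.4
    )
    return transcripts[match[0]] if match else "i don't know"

print(child_reply("What did you do at school today?"))
# -> 'we had drawing and then lunch'
```

A baseline like this can only replay canned answers; whether a model trained on real interviews can generalise beyond that is exactly the open question the team faces.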

All this research is so fresh that the tools are not available in mainstream settings. But results so far are promising. Virtual children have the potential to revolutionise investigative interview training. That is, once researchers figure out how to make the avatars less creepy…
