AI chatbots could provide easier access to mental health services, but we don’t yet know whether the benefits outweigh the risks.
Years ago, my therapist recommended I read a dog-training book, telling me that “the same principles work for humans.” I thanked her, but said it felt too condescending to train my husband like a dog. “No,” she laughed, “the book is to help you train yourself.” If therapists refer their patients to generalisable frameworks (in this case, even to dogs), couldn’t an AI bot function as a therapist by giving the same advice? The short answer is yes, but we don’t know at what risk.
Finding a therapist is hard. The pandemic saw stark increases in depression and anxiety, leading to a global shortage of mental health professionals. Therapy can be expensive, and even for those who can afford it, seeking help requires the effort of reaching out, making time, and scheduling around another person. Enter therapy bots: an alternative that eliminates almost all of the overhead.
Woebot and other therapy chatbots like Wysa and Youper are rising in popularity. These 24/7 couch friends draw on methods like Cognitive Behavioural Therapy, which has a specific structure and well-established exercises. The premise makes sense, and human-computer interaction research shows that people can build rapport with a chatbot, form a personal relationship with it, and come to trust it. They might even trust it more than a person, for example out of fear that a human would judge them.
But while the existing bots use established therapy frameworks, their effectiveness may depend on how the user engages with them, something a human professional finds far easier to guide. To date, there’s very little research indicating whether therapy bots work, whether they’re good or bad for people, and for whom.
Woebot came under fire in 2018 for unwittingly endorsing child sexual exploitation. That issue was addressed, but it won’t be the last. Newer generative AI methods could make a bot’s responses feel less canned, but they come with the problem that nobody can predict exactly what the bot might say, which is particularly risky in a therapy context. AI-based text systems are notorious for baked-in sexism, racism, and false information.
Even with pre-scripted, rule-based answers, it’s easy to cause harm to those seeking mental health advice, many of whom are vulnerable or fragile. While the bots are designed, for example, to recognise suicidal language and refer out to human help, there are many other situations where a bot’s answer might be misguided, or taken the wrong way.
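To make that brittleness concrete, here is a minimal sketch, in Python, of the kind of keyword-based crisis screening a rule-based bot might run. The keyword list and canned replies are illustrative assumptions for demonstration, not any vendor’s actual code.

```python
# Illustrative sketch of rule-based crisis screening; the keywords and
# replies below are assumptions for demonstration, not a real bot's rules.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

CRISIS_REPLY = ("It sounds like you may be in crisis. Please reach out to a "
                "human counsellor or a local crisis line right away.")
DEFAULT_REPLY = "Thanks for sharing. Can you tell me more about how that feels?"


def respond(message: str) -> str:
    """Return a canned reply, escalating only if a crisis keyword appears."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_REPLY
    # Anything that doesn't match the keyword list gets the same generic
    # prompt, however serious the message actually is.
    return DEFAULT_REPLY


if __name__ == "__main__":
    print(respond("I keep thinking about suicide"))      # crisis referral
    print(respond("I can't see the point of going on"))  # generic reply
```

The second message slips straight past the filter and gets the same generic prompt as small talk: exactly the kind of misguided response the paragraph above worries about.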
Good therapists are skilled at knowing when and how (and how hard) to push someone in a certain direction. They read between the lines, they observe gestures, they notice changes in tone, all of which helps inform their responses. They work to strike a difficult balance between meeting their patient where they are and moving them forward. It’s such a difficult skill that even human therapists make missteps.
Bad human therapists are undoubtedly harmful. The profession has seen everything from unsafe advice to therapists scamming their clients out of their life savings. But it has also been geared toward preventing harm, with ethics codes, licence requirements, and other safeguards. Entrusting the sensitive data collected in a mental health context to a person is different from entrusting it to a company. Human therapists may make mistakes, but they aren’t risky at scale. And the promise of these therapy bots is exactly that: scale.
The big selling point is increasing access to therapy, and it’s a compelling one. Lowering the barrier to mental health services is undoubtedly valuable, but we don’t yet know whether the benefits outweigh the risks. In the meantime, there are ways to support people without trying to recreate human therapists.
Ironically, a better solution may be simpler technology. In the 1960s, Joseph Weizenbaum created a chatbot named ELIZA that mostly responded to users with simple questions. Traditional journalling, a technique recommended by many therapists, can be made more accessible through interactive formats like ELIZA. There are also mood-tracking and meditation apps that support people on their mental health journeys.
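To give a sense of how little machinery that takes, here is a rough Python sketch in the spirit of Weizenbaum’s pattern-matching approach. The patterns and replies are simplified inventions, not the original DOCTOR script, but they show how a few reflective rules can turn a statement into a journalling-style prompt.

```python
import random
import re

# A toy ELIZA-style responder in the spirit of Weizenbaum's approach:
# match a simple pattern, then reflect the user's own words back as a question.
# These patterns and replies are simplified inventions, not the DOCTOR script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["What makes you say you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]


def reply(message: str) -> str:
    """Return a reflective question based on the first matching rule."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(reply("I feel anxious about work"))  # e.g. "Why do you feel anxious about work?"
    print(reply("my sister never calls"))      # "Tell me more about your sister."
```

There’s no understanding here at all, just reflection, which is arguably the point: as a prompt for writing or self-examination it can be useful without pretending to be a therapist.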
It’s possible that therapy bots can be a huge help to people. But we should be wary of any products rushing to market with insufficient research, and especially AI-powered apps that may incorporate all manner of known and unknown harms. This week, when I asked my therapist what she thought about the bots, her main concern was simple: don’t trust anyone who is in it for the money.