Should Chatbot Psychologists Be Part of the Healthcare System?

This year, the announcement that chatbot psychologists could become part of Australia’s healthcare system within the next two years sparked controversy in the medical and public sectors.

The bots, which mimic conversations with users through a voice or text-based interface, can be programmed with human-like characteristics and deliver structured mental health programs.

But is the interaction meaningful enough to produce therapeutic results?

Dr. Cathy Kezelman AM, CEO and executive director of the Blue Knot Foundation, believes AI has a place in healthcare but is concerned about its impact on people with complex trauma.

“Complex trauma results from harmful interpersonal interactions and, for this reason, requires safe human relationships to facilitate the healing process.

“Helping people feel heard and regain the trust they lost through their original betrayal is, in my opinion, only possible when there is a dedicated person at their side.”

“My concern with machines is that they miss many of the sensitivities that are important to the therapeutic alliance and to minimizing the risk of additional trauma.”

Too clingy?

Digital mental health specialist Dr. Simon D’Alfonso, of the University of Melbourne, raises similar concerns but warns that the opposite scenario could cause even more harm.

When the world’s first chatbot “ELIZA” was introduced in 1966, even its inventor Joseph Weizenbaum was surprised that people attributed human-like feelings to it.

Since then, researchers have been warning about the dangers of projecting empathy and semantic understanding onto programs with a textual interface due to the so-called “ELIZA effect.”

“The problem with some of the less structured bots is that people can be drawn into casual conversations and led down an emotional rabbit hole,” D’Alfonso said.

“They can go too far in attributing human characteristics to a non-sentient system, and often the depth of their exchange is not justified by the chatbot’s capabilities.”

Another danger of getting too involved with a bot is the feeling of loss when its functionality changes.

“Sometimes manufacturers can suddenly change the parameters of these platforms and the user ends up feeling devastated,” D’Alfonso said.

Is there a lack of substance?

Even with more structured bot variants, D’Alfonso is concerned about the possibility of negative consequences.

“With the development of natural language processing models, we see greater scope for bots to have open-ended conversations. But it’s unlikely you’ll ever find anything that has the cognitive complexity, semantic sophistication, and emotional richness to conduct something like a substantive psychotherapy dialogue.

“Many psychotherapies rely on facial expressions, interpersonal presence, and nonverbal cues. Acoustic and paralinguistic qualities – such as a person’s pitch and intonation – all convey important information and play a role in the therapeutic alliance.”

However, the idea of a digital therapeutic alliance (DTA) has recently become a research topic. Although it differs from the traditional therapeutic alliance, research suggests that a DTA is a real phenomenon that can develop with mental health apps or chatbots under the right conditions.

Studies show that these technologies can effectively treat anxiety and depression by providing self-directed therapy.

D’Alfonso said the best results are likely to come when bots encourage users to set goals and tasks and provide a personalized experience. This type of structured intervention can even foster an emotional connection with the bot, he said.

“There will be cases where a human client will want to interact with a chatbot and may even develop a bond with it.

“Of course it won’t be a real mutual bond because the bots aren’t capable of that. But it might be just enough to provide therapeutic results when used in conjunction with a goal-based framework.”

The slot machine effect?

Kezelman agrees that chatbots could inspire feelings of connection, but draws a parallel with social media to highlight the risks.

“You only have to walk past a bus stop to see how attached people can become to their technology. But we know that many of these attachments can also be harmful.”

In fact, research shows that social media has addictive properties, meaning some people continue to use it despite negative consequences.

A study found that time spent on social media is linked to depression and suicidality. Despite this, some users remained hooked because of its dopaminergic effects.

D’Alfonso agrees, warning of the possibility of a “slot machine effect,” where people continue to use a bot to satisfy their curiosity about its next move.

An additional solution?

Despite the potential for harm, both experts agree that chatbots could play a useful role in a resource-limited healthcare system.

“It’s certainly an attractive option when there are issues of availability and cost with traditional therapy — I’m just not sure it should be the patient’s primary relationship,” Kezelman said.

Meanwhile, D’Alfonso sees the bots as more of an interim solution.

“Someone could chat with the bot for a bit and then see their therapist. I don’t see them as a comprehensive replacement.”

Image credit: iStock.com/Vertigo3d
