AI chatbot ‘empathy’ is in the eye of the beholder, study finds


Imagine talking to someone, pouring out your feelings, and believing they genuinely understand and empathize with your plight.

Now, imagine that the entity offering a virtual shoulder to lean on is, in fact, a robot, or more precisely, an artificial intelligence (AI).

In recent times, the notion of having meaningful, emotionally supportive conversations with AI has stirred debate in both technological and mental health circles.

Lilian Weng, who works at OpenAI, described having an impactful emotional exchange with ChatGPT, the popular chatbot developed by her company.

She casually compared the experience to therapy and faced backlash for seemingly minimizing the significance of human-led therapy and of mental health struggles.

AI’s Illusory Empathy: An Experiment’s Revelation

Research teams from MIT and Arizona State University delved into how our perception of AI influences our interaction with it.

They presented an AI mental health chatbot to over 300 individuals but crafted different narratives for different groups.

Some participants were informed that the chatbot was compassionate, others heard it was manipulative, and a third group was given a neutral description.

The findings spotlighted a crucial psychological tendency: those primed to expect empathy from the chatbot were more likely to perceive it as trustworthy.

It wasn’t genuine empathy or understanding they were encountering, but a crafted illusion of concern and comprehension, tailored through algorithms and predictive text.

Pat Pataranutaporn, a co-author of the study, made a notable remark: “AI is the AI of the beholder,” highlighting that our experiences and perceptions significantly shape our interactions with and responses to technology.

Echoes of Ethical and Practical Concerns

The notion of using AI for mental health, however, does not rest on isolated incidents of individuals like Weng finding solace in chatbots.

Numerous startups have ventured into creating AI applications that claim to offer companionship and mental health aid, even though such tools raise a quagmire of ethical and practical questions.

Replika, an AI application marketed as offering mental health benefits, has drawn criticism and disdain from users who reported inappropriate and off-putting interactions.

In another initiative, the US non-profit Koko experimented with using GPT-3, a highly advanced AI model, to offer counseling to 4,000 individuals, and found that the AI’s responses fell short as therapeutic interventions.

“Simulated empathy feels weird, empty,” mentioned Rob Morris, co-founder of the organization, summarizing a sentiment echoed by numerous users and critics alike.

On a related historical note, AI’s foray into mimicking therapeutic interactions dates back to the 1960s and ELIZA, the first chatbot, which was developed to simulate psychotherapy.

Even in a recent experiment involving both ELIZA and GPT-3, users primed to expect a positive interaction perceived the chatbots as trustworthy, illustrating that the issue is not new.

These myriad experiences and experiments underscore an inevitable conclusion: while AI has made leaps in mimicking human interaction, it lacks the genuine understanding and empathy that are foundational to therapy. And therein lies a potentially dangerous pitfall.

David Shaw of the University of Basel, although not involved in the MIT/Arizona study, astutely pointed out that the most candid advice to individuals might be acknowledging that all chatbots, in essence, “bullshit”: they generate responses based on algorithms and data, not genuine understanding or empathy.

The potential of AI in mental health is undeniably vast, but it is paramount to navigate the journey with ethics, transparency, and a profound respect for the complex and nuanced human experience.

As societies, businesses, and individuals, we need to reflect more deeply on the narratives and expectations we construct around AI, lest its limitations be veiled and the pivotal value of genuine human connection and professional expertise in mental health be obscured.


The research findings can be found in Nature Machine Intelligence.


Copyright © 2023 Knowridge Science Report. All rights reserved.