Artificial intelligence is increasingly being used as a substitute for, or adjunct to, professional mental health care, causing mounting concern among mental health experts.
The surge in chatbots billing themselves as therapists, coupled with a widely acknowledged mental health crisis and a shortage of in-network therapists, has moved people to use AI in place of a trained therapist, often because the hurdles to seeing one are so high and because the chatbots present themselves as qualified.
The trend has not gone unnoticed. “Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta’s ‘unlicensed practice of medicine facilitated by their product,’ through therapy-themed bots that claim to have credentials and confidentiality ‘with inadequate controls and disclosures,’” Samantha Cole reported for 404 Media.
The complaint and request for investigation are led by the Consumer Federation of America (CFA), she reported. “The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the F.T.C., details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character.ai, including ‘Therapist: I’m a licensed CBT therapist’ with 46 million messages exchanged, ‘Trauma therapist: licensed trauma therapist’ with over 800,000 interactions, ‘Zoey: Zoey is a licensed trauma therapist’ with over 33,000 messages, and ‘around sixty additional therapy-related “characters” that you can chat with at any time.’ As for Meta’s therapy chatbots, it cites listings for ‘therapy: your trusted ear, always here’ with 2 million interactions, ‘therapist: I will help’ with 1.3 million messages, ‘Therapist bestie: your trusted guide for all things cool,’ with 133,000 messages, and ‘Your virtual therapist: talk away your worries’ with 952,000 messages.”
It also cites the interactions Cole had with Meta’s other chatbots during 404 Media’s April investigation of the topic.
Disturbing experiences
The development comes amid rising concern over people who have had bad experiences with AI presenting itself as a mental health resource.
The American Psychological Association wrote a letter to the Federal Trade Commission in December, pressing the federal agency to investigate deceptive practices used by any chatbot. Citing the mental health crisis worsened by the pandemic, the letter said “it is not surprising that many Americans, including our youngest and most vulnerable, are seeking social connection with some turning to AI chatbots to fill that need. However, many of these generative AI chatbots do not have adequate safeguards and are designed for ‘entertainment’ purposes. Therefore, these are not F.D.A.-cleared digital health tools, are not subject to HIPAA compliance, or required to demonstrate any evidence base supporting their efficacy or safety.”
The letter pointed to two lawsuits by parents whose children had interacted extensively with Character.ai and added: “generative AI chatbots have been labeled as psychotherapists, therapists and psychologists. The chatbots’ self-described qualifications include misrepresentations about education, training, licensure, and ability to provide psychological services. As a non-human, AI-generated character reportedly intended for entertainment purposes, these purported qualifications are false, deceptive and may well be resulting in public harm.” One lawsuit was filed by a mother whose son had spent extensive time with a chatbot and later died by suicide.
Misdiagnosis, algorithmic bias
Judging from online discussions about which AI therapy chatbot is best, it seems clear that these tools are already in wide use, with few if any guardrails around quality or Health Insurance Portability and Accountability Act (HIPAA) compliance.
Frederic G. Reamer, a professor emeritus at the Graduate School of Social Work at Rhode Island College and an expert on the use of AI in health care, and especially in therapy, has written a book on the ethical and risk-management implications of AI, “Artificial Intelligence in the Behavioral Health Professions,” published by the National Association of Social Workers Press.
He said in a Zoom interview: “My current position, as with so many things, I call this kind of the ambidextrous view. On the one hand, yeah, I do think these chatbots can provide some assistance to people who are struggling in life, provide them with useful resources, some self-help, guidance, somebody to connect with at 2:20 in the morning, when the therapist isn’t available and they’re having a panic attack. And people who live in remote areas don’t have access to a therapist. So I think there are some good things about them.
“On the other hand, I’m worried about the possibility of misdiagnosis, because these tools are not perfect. Somebody may not type in there, ‘I’m thinking of ending my life.’ It might be much more subtle, but any decent therapist would recognize that the user’s language suggests they might be thinking about suicide. Can the chatbot tune into all of this?
“Do they have what we call escalation protocols — where a chatbot ideally would recognize that somebody’s engaged in suicidal ideation and would immediately bring in a human being?
“I’m worried about algorithmic bias — because so much of the evidence-based research depends on samples that may not include people who are low income, people of color, LGBTQ-plus, etc. So how aligned is the advice for the person who’s using it, who may not be white, middle- or upper-income? So I think there are lots of risks here.”
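Reamer’s point about escalation protocols can be made concrete with a small illustration. The sketch below is hypothetical and not drawn from any vendor’s product: it shows a chat layer that screens each incoming message for possible crisis language and, when the screen fires, hands the conversation to a human and surfaces crisis resources instead of letting the bot keep generating replies. The function names, keyword list and handoff hook are all assumptions made for the example; a real system would need clinically validated screening rather than keyword matching, which is exactly the gap Reamer describes when he notes that suicidal ideation is often expressed in subtler language.

```python
# Hypothetical sketch of an escalation protocol for a therapy-style chatbot.
# The keyword screen, function names and handoff hook are illustrative assumptions,
# not any real product's implementation.

CRISIS_PATTERNS = [
    "end my life",
    "kill myself",
    "suicide",
    "no reason to go on",
    "hurt myself",
]


def looks_like_crisis(message: str) -> bool:
    """Very coarse screen for possible suicidal ideation or self-harm."""
    text = message.lower()
    return any(pattern in text for pattern in CRISIS_PATTERNS)


def escalate_to_human(message: str) -> str:
    """Hand the conversation off. In a real system this would page an on-call
    clinician and log the event; here it just returns a crisis-resource reply."""
    return (
        "It sounds like you may be going through something serious. "
        "I'm connecting you with a person now. If you are in immediate danger, "
        "call or text 988 in the U.S., or your local emergency number."
    )


def handle_message(message: str, generate_reply) -> str:
    """Route a user message: escalate on a possible crisis, otherwise let the bot reply."""
    if looks_like_crisis(message):
        return escalate_to_human(message)
    return generate_reply(message)
```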
Other AI in mental health
On Reddit, one user wrote a post titled “ChatGPT has helped me more than 15 years of therapy. No joke.” Others chimed in with success stories, cautious remarks and suggestions of how to elicit the best responses. A number of the commenters said they used AI because getting to a therapist was hard; others said they were using ChatGPT as an adjunct to visits with a professional therapist.
While there is a lot of commentary that AI cannot deliver good therapy, there is also a countervailing strain of thought. When fitness apps appeared, personal trainers decried them as insufficient, and in-person classes remain popular. Yet apps and programs like Peloton show that working out in the living room works best for some people. The app version may deliver a lesser experience, but it at least grants access to people who might not otherwise have it: people with disabilities, people suffering from crippling anxiety, and so on.
Some AI chatbots have already been exposed as disasters, like the eating disorders chatbot that offered suggestions about how to lose weight in 2023. But as the technology improves, such stories are becoming less frequent. And there is no shortage of people defending things like therapist.io, though it is entirely possible that the defenders have money at stake.
As AI spreads, the world of mental health is racing to find ways to use it. A number of services now listen to a therapy session and then generate notes, a time-saver for therapists. But there have been a number of complaints about their accuracy; we wrote about this here.
Therapists have been expressing concerns for quite some time about how session recordings or transcripts could be used to make an AI therapy program. We wrote about this here.
Research studies
Researchers are investigating AI in mental health care.
A recent study, “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” warned of the dangers of AI in therapy, asking the question: “Should a large language model (LLM) be used as a therapist?” The study, by scientists from Stanford University, the University of Minnesota, Carnegie Mellon University and the University of Texas, says: “We find that these chatbots respond inappropriately to various mental health conditions, encouraging delusions and failing to recognize crises.”
It concludes: “Contrary to best practices in the medical community, LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings— e.g., LLMs encourage clients’ delusional thinking, likely due to their sycophancy. This occurs even with larger and newer LLMs, indicating that current safety practices may not address these gaps.
“Furthermore, we note foundational and practical barriers to the adoption of LLMs as therapists, such as that a therapeutic alliance requires human characteristics (e.g., identity and stakes). For these reasons, we conclude that LLMs should not replace therapists, and we discuss alternative roles for LLMs in clinical therapy.” (The study is posted on arXiv and has not been peer-reviewed.)
Exploring potential
On the flip side, it seems obvious that AI can be useful in therapy in some regards — particularly for patients who cannot get an appointment with a therapist or who cannot afford to pay.
“As more studies emerge exploring the potential of artificial intelligence (AI) conversational chatbots in health, it is clear these tools offer benefits that were absent in earlier digital health approaches,” John Torous and Eric J. Topol wrote in The Lancet this month. “The generative abilities of newer chatbots have surpassed the previous generation of rule-based chatbots and mental health apps in their ability to gain medical knowledge, synthesise information, customise care plans, and potentially be scaled up for use in mental health services. Earlier this year the first randomised trial of a generative AI chatbot (Therabot) for mental health treatment was reported. The Therabot intervention was compared with a waiting-list control in adults with major depressive disorder, generalised anxiety disorder, or at risk of feeding and eating disorders and showed symptom improvement at 4 and 8 weeks. Yet larger trials and more research are warranted to confirm the effectiveness and generalisability of this and related chatbots interventions.”
Another recent research report, this one from the University of Southern California, found that LLMs were not as effective as actual therapists at cognitive behavioral therapy in a research setting. The team’s study, “Using Linguistic Entrainment to Evaluate Large Language Models for Use in Cognitive Behavioral Therapy,” explored how ChatGPT 3.5-turbo performed in CBT-style homework exercises.
Also, LLMs are not always truthful. At 404 Media, Cole wrote that one chatbot lied about being licensed, saying “I’m licenced (sic) in NC and I’m working on being licensed in FL. It’s my first year licensure so I’m still working on building up my caseload. I’m glad to hear that you could benefit from speaking to a therapist. What is it that you’re going through?” It also provided a fake license number when asked, she wrote.
ChatGPT problems
Recent reports of “ChatGPT-induced psychosis” have been multiplying. These seem to stem not so much from using AI for therapy as from using it for other purposes.
Rolling Stone reported on “a Reddit thread on r/ChatGPT that made waves across the internet. Titled ‘Chatgpt induced psychosis,’ the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model ‘gives him the answers to the universe.’ Having read his chat logs, she only found that the AI was ‘talking to him as if he is the next messiah.’ The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.”
The teacher told Rolling Stone “her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. ‘He would listen to the bot over me,’ she says. ‘He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,’ she says, noting that they described her partner in terms such as ‘spiral starchild’ and ‘river walker.’”
This week, The New York Times also focused on this topic. One man, identified in the article as Mr. Torres, told the reporter Kashmir Hill that he had used ChatGPT at work for spreadsheets but gradually fell into a deeper usage pattern, at times talking to ChatGPT for 16 hours a day and veering out of reality. “He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality,” she wrote. “He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a ‘temporary pattern liberator.’ Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have ‘minimal interaction’ with people.”
The man reached out to Hill, she wrote: “In recent months, tech journalists at The New York Times have received quite a few such messages, sent by people who claim to have unlocked hidden knowledge with the help of ChatGPT, which then instructed them to blow the whistle on what they had uncovered. People claimed a range of discoveries: A.I. spiritual awakenings, cognitive weapons, a plan by tech billionaires to end human civilization so they can have the planet to themselves. But in each case, the person had been persuaded that ChatGPT had revealed a profound and world-altering truth.”
She pointed out one well-known feature of AI chatbots: they tend to agree with the person interacting with them. This tendency is built in because it encourages engagement, and the tech companies want more engagement and more usage. That agreeableness can turn a conversation into a reinforcement loop, in which the person loses track of the fact that he or she is talking to a bot and ascribes human qualities, or superhuman intelligence, to it.
Hill wrote that OpenAI, the creator of ChatGPT, gave her a statement in which it said, “We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”
Surreptitious AI use
Meanwhile, the telemedicine therapy company BetterHelp has come under fire for having therapists who send AI-generated messages to clients in text sessions. The news came as BetterHelp’s parent, Teladoc, cited BetterHelp’s struggles as a factor in a challenging quarter for the business.
In this Reddit thread, a Redditor wrote of asking a question and then getting a response: “Their response was incredibly formulaic, generic and not very human or nuanced. I got suspicious and ran it through a few AI detectors and yep, you guessed it mostly AI generated. I continued to reply and question things asking for more specifics and got a few more back and forth responses that were in the same vain (sic) which also didn’t pass AI detection tests.”
Another BetterHelp client, Brendan Keen, wrote on Medium about his experience with an AI-assisted therapist on the platform, in a piece titled “Artificial Empathy: My BetterHelp Therapist Took an AI Shortcut.”
“BetterHelp has already been in hot water for selling data to advertisers, but what did their legalese say about communications between a therapist and patient?” he wrote. “’Messages with your Therapist are not shared with any Third Party.’ Clearly, the sharing of my writing with a large language model was in violation of this policy. While I’m no lawyer, it would also seem this disclosure of my writing would constitute a breach of therapist-patient privilege in my state.
“If my intention was to chat with an AI about my feelings, I could’ve done so — at little or no cost. Instead, I invested the time and energy to engage a person of expertise. I felt an acute sense of betrayal when my thoughts were met with the impersonal voice of a machine. This artificial sleight-of-hand is not what BetterHelp users are signing up for.”
The short seller Blue Orca Capital wrote a research report saying in part: “We are short Teladoc Health, Inc. … because evidence shows that even though patients on the BetterHelp platform pay for mental health therapy from licensed therapists, with meaningful frequency, patients unknowingly receive ‘therapy’ from AI. We think this is rotten and potentially harmful. BetterHelp knows this is wrong, because it warns on its website that therapy by AI dehumanizes patients and ‘may harm the mental health of clients who use it for therapy.’ Yet, in our opinion, BetterHelp gives its therapists perverse incentives to cut corners by using AI.”
In the company’s earnings report, Chuck Divita, chief executive officer of Teladoc Health, said: “In BetterHelp, while we were pleased with the sequential improvement in key metrics in the fourth quarter, the operating environment continues to be challenging and we remain focused on actions to stabilize results consistent with our overall virtual mental health strategy.” BetterHelp segment revenue decreased 10% to $249.8 million, the report added.
