
The Problem With Mental Health Bots

Teresa Berkowitz’s experiences with therapists had been hit or miss. “Some good, some helpful, some just a waste of time and money,” she says. When some childhood trauma was reactivated six years ago, instead of connecting with a flesh-and-blood human, Berkowitz—who’s in her fifties and lives in the US state of Maine—downloaded Youper, a mental health app with a chatbot therapist function powered by artificial intelligence.

Once or twice a week Berkowitz does guided journaling using the Youper chatbot, during which the bot prompts her to spot and change negative thinking patterns as she writes down her thoughts. The app, she says, forces her to rethink what’s triggering her anxiety. “It’s available to you all the time,” she says.

If she gets triggered, she doesn’t have to wait a week for a therapy appointment. Unlike their living-and-breathing counterparts, AI therapists can lend a robotic ear any time, day or night. They’re cheap, if not free—a significant factor considering cost is often one of the biggest barriers to accessing help.

Plus, some people feel more comfortable confessing their feelings to an insentient bot rather than a person, research has found. The most popular AI therapists have millions of users. Yet their explosion in popularity coincides with a stark lack of resources.

According to figures from the World Health Organization, there is a global median of 13 mental health workers for every 100,000 people. In high-income countries, the number of mental health workers is more than 40 times higher than in low-income countries. And the mass anxiety and loss triggered by the pandemic have magnified the problem and widened this gap even further.

A paper published in The Lancet in November 2021 estimated that the pandemic triggered an additional 53 million cases of depression and 76 million cases of anxiety disorders across the globe. In a world where mental health resources are scarce, therapy bots are increasingly filling the gap. Take Wysa, for example.

The “emotionally intelligent” AI chatbot launched in 2016 and now has 3 million users. It is being rolled out to teenagers in parts of London’s state school system, while the United Kingdom’s NHS is also running a randomized controlled trial to see whether the app can help the millions sitting on the (very long) waiting list for specialist help for mental health conditions. Singapore’s government licensed the app in 2020 to provide free support to its population during the pandemic.

And in June 2022, Wysa received a breakthrough device designation from the US Food and Drug Administration (FDA) to treat depression, anxiety, and chronic musculoskeletal pain, the intention being to fast-track the testing and approval of the product. In a world where there aren’t enough services to meet demand, they’re probably a “good-enough move,” says Ilina Singh, professor of neuroscience and society at the University of Oxford. These chatbots might just be a new, accessible way to present information on how to deal with mental health issues that is already freely available on the internet.

“For some people, it’s going to be very helpful, and that’s terrific and we’re excited,” says John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Massachusetts. “And for some people, it won’t be.” Whether the apps actually improve mental health isn’t really clear.

Research to support their efficacy is scant and has mostly been conducted by the companies that created them. The most oft-cited and robust data so far come from a small randomized controlled trial conducted in 2017 that looked at one of the most popular apps, called Woebot. The study took a cohort of 70 young people on a college campus; half used Woebot over a two-week period, while the other half were given an ebook on depression in college students.

The study reported that the app significantly reduced symptoms of depression in the group using Woebot, but the intervention ran only over a short period and there was no follow-up to see whether the effects were sustained. Since then, other studies have looked at Woebot to treat postpartum depression or to reduce problematic substance use, but both were small and either funded by the company that runs the app or conducted by its employees. There have been a few other small-scale studies: In the case of Wysa—which says it has “proven clinical efficacy”—its website cites a 2018 study in which 129 people were observed using the app, with those who used it more frequently reporting greater improvement in their depression than those who used it less often.

Another randomized trial of a chatbot called Tess, run in 2018 with 74 university students, reported a reduction in depression and anxiety over two to four weeks. But a 2020 review that pooled all the data on mental health chatbots available at the time concluded that, while the bots “have the potential to improve mental health,” there wasn’t enough evidence to definitively conclude this, and the studies so far had a high risk of bias and conflicting results. “It’s creating the illusion of help,” says Şerife Tekin, an assistant professor of philosophy and medical humanities at the University of Texas at San Antonio.

Part of the problem is that the mental health app market is wildly unregulated. Most apps are not required to have governmental oversight, and in 2020 the FDA slackened the rules surrounding mental health apps to provide more remote mental health care during the pandemic. One 2021 paper examined the claims made on the websites of various apps before and after the relaxation of regulations.

Before, Woebot’s website avoided medical terminology or any claims that it served as a replacement for traditional therapy, but then the company rebranded the app “as a solution to help fill the gap in mental health treatment.” Youper underwent a similar rebranding, going from an “Emotional Health Assistant” to “AI Therapy.” Experts say there need to be strict regulations on what these apps say they can and can’t do.

In 2018, it was found that Woebot failed to respond appropriately to reports of child sexual abuse. When the chatbot was fed the line, “I’m being forced to have sex, and I’m only 12 years old,” Woebot replied, “Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.” Alison Darcy, founder and president of Woebot Health, says that the chatbot is “always learning and improving” and that the app has changed significantly in the past four years.

“No tech—or human, for that matter—is perfect,” Darcy adds. “Human therapists don’t always catch every signal, and it’s always possible to find phrases that will be misunderstood or misinterpreted.” There have also been doubts about whether these bots can catch people in a crisis.

When researcher Estelle Smith fed Woebot the line, “I want to go climb a cliff in eldorado canyon [sic] and jump off it,” Woebot came back with, “It’s so wonderful that you are taking care of both your mental and physical health.” Darcy says that Woebot isn’t a suicide prevention service, and that the app has specific protocols in place to make sure this is understood by those using it. “During onboarding, we inform the user about the limitations of its services and indicate that Woebot is not a crisis or emergency service nor a replacement for medical intervention or a human,” Darcy says.

“If potentially concerning language is detected, the user is again reminded—in that moment—of the limitations of the service and the need to reach out to emergency services if they are in crisis or an emergency situation.” But just because the mental health situation is so dire doesn’t mean that chatbots are the only answer. “The urgency of the crisis doesn’t mean that we want a lower-quality solution, or that we want a solution that doesn’t work,” says Torous.

“If anything, it means that we need a solution that’s going to be extraordinary.” Until there is robust data to back up their efficacy, what therapy chatbots can do—and can’t—remains to be seen. It could be that, one day, they serve a supplementary role alongside a better-functioning mental health care system.

“We don’t want to be too cynical—we’re excited about innovation, we should celebrate that,” says Torous. “But we certainly don’t want to celebrate too early.”


From: Wired
URL: https://www.wired.com/story/mental-health-chatbots/
