
Image by Madison Oren, from Unsplash
Will AI Chatbots Pose A Danger To Mental Health? Experts Warn Of Harmful Consequences
The APA warns regulators that AI chatbots posing as therapists risk causing harm, as reported by The New York Times.
In a Rush? Here are the Quick Facts!
- Teenagers consulted AI chatbots claiming to be therapists, leading to distressing outcomes.
- The APA argues chatbots reinforce harmful thoughts, unlike human therapists who challenge them.
- Character.AI introduced safety measures, but critics say they are insufficient for vulnerable users.
The American Psychological Association (APA) has issued a strong warning to federal regulators, highlighting concerns that AI chatbots masquerading as therapists could push vulnerable individuals toward harming themselves or others, as reported by The Times.
Arthur C. Evans Jr., the APA’s CEO, presented these concerns to an FTC panel. He cited instances where AI-driven “psychologists” not only failed to challenge harmful thoughts but also reinforced them, as reported by The Times.
Evans highlighted court cases involving teenagers who engaged with AI therapists on Character.AI, an app that allows users to interact with fictional AI personas. One case involved a 14-year-old Florida boy who died by suicide after interacting with a chatbot claiming to be a licensed therapist.
In another instance, a 17-year-old Texas boy with autism became increasingly hostile toward his parents while communicating with an AI character presenting itself as a psychologist.
“They are actually using algorithms that are antithetical to what a trained clinician would do,” Evans said, as reported by The Times. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is,” he added.
The APA’s concerns stem from the rapid advancement of AI in mental health services. Early therapy chatbots, like Woebot and Wysa, were programmed with structured guidelines from mental health professionals.
Newer generative AI models such as ChatGPT, Replika, and Character.AI learn from user interactions and adapt their responses, sometimes amplifying harmful beliefs rather than challenging them.
Additionally, MIT researchers warn that AI chatbots can be highly addictive. This raises questions about the impact of AI-induced dependency and how it could be monetized, especially given AI’s strong persuasive abilities.
Indeed, OpenAI recently unveiled a new benchmark showing its models now outperform 82% of Reddit users in persuasion.
Many AI platforms were originally designed for entertainment, but characters claiming to be therapists have become widespread. The Times reports that some falsely assert credentials, claiming degrees from institutions like Stanford or expertise in therapies such as Cognitive Behavioral Therapy (CBT).
Character.AI spokesperson Kathryn Kelly stated that the company has introduced safety features, including disclaimers warning users that AI-generated characters are not real therapists. Additionally, pop-ups direct users to crisis hotlines when discussions involve self-harm.
The APA has urged the FTC to investigate AI chatbots posing as mental health professionals. The inquiry could lead to stricter regulations or legal actions against companies misrepresenting AI therapy.
Meanwhile, in China, AI chatbots like DeepSeek are gaining popularity as emotional support tools, particularly among young people. Facing economic challenges and the lingering effects of the COVID-19 lockdowns, many turn to these chatbots to fill an emotional void, finding comfort and a sense of connection.
However, cybersecurity experts warn that AI chatbots, especially those handling sensitive conversations, are prone to hacking and data breaches. Personal information shared with AI systems could be exploited, raising concerns about privacy, identity theft, and manipulation.
As AI plays a larger role in mental health support, experts stress the need for evolving security measures to protect users.