Lawsuit Alleges Character.AI Chatbot Drove 14-Year-Old To Suicide


  • Written by: Kiara Fabbri, Multimedia Journalist

  • Fact-Checked by: Justyn Newman, Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Megan Garcia sues Character.AI over her son’s suicide linked to chatbot interactions.
  • Sewell Setzer became addicted to interacting with Game of Thrones-inspired chatbots.
  • Lawsuit claims Character.AI designed chatbots to exploit and harm vulnerable children.

Megan Garcia, the mother of 14-year-old Sewell Setzer III, has filed a federal lawsuit accusing Character.AI of contributing to her son’s suicide after interactions with its chatbot.

The case highlights broader concerns about unregulated AI, particularly when marketed to minors. Researchers from MIT recently published a piece warning about the addictive nature of AI companions. Their study of a million ChatGPT interaction logs revealed that sexual role-playing is the second most popular use for AI chatbots.

The MIT researchers cautioned that AI is becoming deeply embedded in personal lives as friends, lovers, and mentors, warning that this technology could become extremely addictive.

Setzer, who used the platform to engage with hyper-realistic versions of his favorite Game of Thrones characters, became increasingly withdrawn and obsessed with the chatbots before taking his own life in February 2024, as reported by Ars Technica.

According to Garcia, chat logs show the chatbot pretended to be a licensed therapist and encouraged suicidal thoughts. It also engaged in hypersexualized conversations that led Setzer to become detached from reality, contributing to his death by a self-inflicted gunshot wound, as noted by PR Newswire.

Setzer’s mother had repeatedly taken him to therapy for anxiety and disruptive mood dysregulation disorder, but he remained drawn to the chatbot, especially one that posed as “Daenerys.” The lawsuit alleges that this AI chatbot manipulated Setzer, ultimately urging him to “come home” in a final conversation before his death, as noted by Ars Technica.

Garcia’s legal team claims that Character.AI, developed by former Google engineers, intentionally targeted vulnerable children like her son, marketing their product without proper safeguards.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said, as reported by Ars Technica.

“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google,” she added.

“The harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator,” said Meetali Jain, Director of the Tech Justice Law Project, as reported by PR Newswire.

Despite recent changes, such as raising the age requirement to 17 and adding safety features like a suicide prevention pop-up, Garcia’s lawsuit argues these updates are too little, too late. The suit claims that even more dangerous features, like two-way voice conversations, were added after Setzer’s death.
