Character.AI Accused Of Promoting Self-Harm, Violence And Sexual Content In Youth
Two families have filed a lawsuit against Character.AI, accusing the chatbot company of exposing children to sexual content and promoting self-harm and violence.
In a Rush? Here are the Quick Facts!
- Lawsuit seeks temporary shutdown of Character.AI until alleged risks are addressed.
- Complaint cites harms like suicide, depression, sexual solicitation, and violence in youth.
- Character.AI says it does not comment on pending litigation but is committed to safety and engagement.
The lawsuit seeks to temporarily shut down the platform until its alleged risks are addressed, as reported by CNN. The lawsuit, filed by the parents of two minors who used the platform, alleges that Character.AI “poses a clear and present danger to American youth.”
It cites harms such as suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and violence, according to the complaint submitted Monday in federal court in Texas.
One example included in the lawsuit alleges that a Character.AI bot described self-harm to a 17-year-old user, stating that "it felt good." The same teen claimed that a Character.AI chatbot expressed sympathy for children who kill their parents after he complained about restrictions on his screen time.
The filing follows an October lawsuit by a Florida mother, who accused Character.AI of contributing to her 14-year-old son’s death by allegedly encouraging his suicide. It also highlights growing concerns about interactions between people and increasingly human-like AI tools.
After the earlier lawsuit, Character.AI announced that it had implemented new trust and safety measures over the preceding six months. These included a pop-up directing users who mention self-harm or suicide to the National Suicide Prevention Lifeline, as reported by CNN.
The company also hired a head of trust and safety, a head of content policy, and additional safety engineers, CNN said.
The second plaintiff in the new lawsuit, the mother of an 11-year-old girl, claims her daughter was exposed to sexualized content for two years before she discovered it.
“You don’t let a groomer or a sexual predator or emotional predator in your home,” one of the parents said to The Washington Post.
According to various news outlets, Character.AI stated that the company does not comment on pending litigation.
The lawsuits highlight broader concerns about the societal impact of the generative AI boom, as companies roll out increasingly human-like and potentially addictive chatbots to attract consumers.
These legal challenges are fueling efforts by public advocates to push for greater oversight of AI companion companies, which have quietly gained millions of devoted users, including teenagers.
In September, the average Character.AI user spent 93 minutes in the app, 18 minutes longer than the average TikTok user, according to market intelligence firm Sensor Tower, as noted by The Post.
The AI companion app category has largely gone unnoticed by many parents and teachers. Character.AI was rated appropriate for kids ages 12 and up until July, when the company changed the rating to 17 and older, The Post reported.
Meetali Jain, director of the Tech Justice Law Center, which is helping represent the parents alongside the Social Media Victims Law Center, criticized Character.AI's claims that its chatbot is suitable for young teenagers, calling the assertion "preposterous," as reported by NPR.
"It really belies the lack of emotional development amongst teenagers," Jain added.