
AI Companions Fill Emotional Void For China’s Youth, Raising Ethical Concerns
Young people in China are increasingly turning to artificial intelligence for emotional support, with the Chinese chatbot DeepSeek emerging as a popular alternative to traditional counseling.
In a Rush? Here are the Quick Facts!
- Young Chinese users seek AI chatbots like DeepSeek for emotional support and counseling.
- DeepSeek’s responses are deeply resonant, sometimes bringing users to tears.
- DeepSeek offers personalized, empathetic conversations, filling an emotional void for young users.
Users describe the AI’s responses as deeply resonant, sometimes even bringing them to tears, as detailed in an article by the BBC.
Holly Wang, a 28-year-old from Guangzhou, has been using DeepSeek since its launch in January for what she calls “therapy sessions.” The AI chatbot has helped her process personal struggles, including the recent passing of her grandmother, as reported by the BBC.
“DeepSeek has been such an amazing counsellor. It has helped me look at things from different perspectives and does a better job than the paid counselling services I have tried,” she said, as reported by the BBC.
DeepSeek, a generative AI tool similar to OpenAI’s ChatGPT and Google’s Gemini, has gained national recognition for its superior performance compared to other Chinese AI apps. Beyond its advanced language abilities, it stands out by allowing users to see its reasoning process before delivering responses.
The BBC reports that for many young Chinese, AI is filling an emotional void. Economic challenges, high unemployment, and the lingering effects of COVID-19 lockdowns have left many feeling uncertain about their futures. DeepSeek has become a source of comfort, offering personalized and empathetic responses.
When Holly first used the app, she asked it to write a tribute to her grandmother. The response was so moving that it triggered an existential crisis.
DeepSeek replied: “Remember that all these words that make you shiver merely echo those that have long existed in your soul. I am but the occasional valley you’ve passed through, that allows you to hear the weight of your own voice.”
Reflecting on the exchange, she said to the BBC: “I don’t know why I teared up reading this. Perhaps because it’s been a long, long time since I received such comfort in real life.”
With Western AI models like ChatGPT blocked in China, DeepSeek has quickly become a preferred choice. Other Chinese AI models developed by Alibaba, Baidu, and ByteDance have struggled to match its capabilities, particularly in generating creative and literary content.
Beyond casual conversations, DeepSeek is increasingly seen as a counselor. Nan Jia, a professor at the University of Southern California, notes that AI chatbots “help people feel heard,” sometimes even more effectively than human counterparts, as reported by the BBC.
However, concerns remain. Researchers at MIT have warned that AI is increasingly being woven into people’s personal lives, taking on roles as friends, romantic partners, and mentors, and have cautioned that the technology could become highly addictive.
Despite privacy concerns, many users prioritize the chatbot’s emotional support over the potential risks. The BBC reports that one user wrote, “Its thought process is beautiful… It is an absolute blessing to people like me. Frankly, I can’t care less about the privacy concerns.”
Beyond emotional support, artificial intelligence is changing the way people think about death and how we remember those who have passed, as reported in a recent research analysis. AI technology can now create digital versions of the deceased, allowing posthumous interactions.
However, the authors of the analysis also note that digital grieving may complicate emotional closure by keeping memories too accessible.
At the same time, concerns about AI’s impact on youth are growing, with lawsuits filed against AI companion platforms. Character.AI, a chatbot company, is facing legal action from two families who claim it exposed minors to self-harm, violence, and sexual content.
The lawsuit argues that AI-generated interactions could be harmful, raising questions about how these technologies shape young users’ emotional well-being.
As AI becomes more integrated into mental health care, experts stress that it should complement human professionals, not replace them. While AI therapy tools can analyze vast datasets to offer personalized insights, they must be designed to ensure patient safety, privacy, and ethical use, as highlighted by the World Economic Forum.
Moreover, cybersecurity experts warn that AI chatbots, particularly those used for sensitive conversations, are vulnerable to hacking and data breaches.
Personal information shared with AI systems could be exploited, raising concerns about privacy, identity theft, and manipulation. Experts caution that as AI becomes more ingrained in mental health support, security measures must evolve to protect users from potential risks.