
Opinion: Is ChatGPT Your Friend? It Might Be A Good Time To Set Limits
OpenAI and MIT recently released a paper on the impact of ChatGPT on people's well-being. While most users rely on the technology for practical tasks, the study reveals that a small group has been developing deep emotional connections with the AI model that can affect their well-being.
For some time now, I’ve been observing with curiosity the relationships some people are developing with generative artificial intelligence. A few months ago, I read in The New York Times the story of a 28-year-old married woman who fell in love with ChatGPT, and how what started as “a fun experiment” evolved into a complex and unexpected relationship.
I've been watching friends, especially those who once rejected technology or weren't interested in it, become unable to make a big decision without consulting their AI oracle. I've also found myself surprised by the empathetic responses AI models give to emotionally or psychologically charged queries.
And, of course, I've laughed at the jokes, memes, and TikTok videos people post on social media showing how dependent they've become on the chatbot, some even calling it their "best friend" or "therapist" and seriously recommending that others do the same.
But if we put the fun experiences and jokes aside for a moment, we might realize we are facing a concerning global phenomenon.
This month, for the first time in the short history of artificial intelligence, OpenAI and the MIT Media Lab released a study offering insights into ChatGPT's current impact on people's emotional well-being, as well as warnings about the risks we might face as a society: loneliness, emotional dependency, and fewer social interactions with real people.
A Relationship That Evolves
A user's first approach to the new generative artificial intelligence technologies often begins with a few timid questions: practical requests like crafting an email, asking for explanations of complex topics, or simple brainstorming.
However, once a user begins testing the chatbot's capabilities, they discover those capabilities are broader and more complex than expected.
While certain AI products like Friend—a wearable AI device—have been designed and very awkwardly promoted as a user’s life companion, ChatGPT has been advertised as a productivity tool. Yet, a percentage of people use the chatbot for personal and emotional matters and develop strong bonds with it.
Even if they're just a "small group," as OpenAI clarified, these users could still represent millions of people worldwide, especially considering that over 400 million people now use ChatGPT weekly. They quickly notice that OpenAI's chatbot mimics their language, tone, and style, and can even be trained to interact in a certain way, use pet names (as the woman who fell in love with it did), and "sound" more human.
“Their conversational style, first-person language, and ability to simulate human-like interactions have led users to sometimes personify and anthropomorphize these systems,” states the document shared by OpenAI.
But this closeness comes with risks, as the researchers noted: “While an emotionally engaging chatbot can provide support and companionship, there is a risk that it may manipulate users’ socioaffective needs in ways that undermine longer term well-being.”
The Study’s Methodology
The recently released investigation focuses on users' well-being after consistent use of ChatGPT. To understand the chatbot's emotional and social impact, the researchers conducted two main studies with different strategies.
OpenAI analyzed over 40 million interactions using automated classifiers that preserve users' privacy, and surveyed more than 4,000 users about how those interactions made them feel.
MIT Media Lab conducted a trial with almost 1,000 participants over a month, focusing on the psychosocial consequences of using ChatGPT for at least five minutes a day. Participants also completed questionnaires at the end of the experiment.
Unsurprisingly, the findings revealed that users who spend more time with the technology experience more loneliness and show more signs of isolation.
Complex Consequences And Multiple Ramifications
The MIT Media Lab and OpenAI’s study also offered several reflections on how complex and unique human-chatbot relationships can be.
In the research, the authors give us a glimpse into the diverse experiences and ways each user interacts with ChatGPT—and how the outcome can vary depending on different factors, such as the use of advanced voice features, text-only mode, the voice type, frequency of use, conversation topics, the language used, and the amount of time spent on the app.
“We advise against generalizing the results because doing so may obscure the nuanced findings that highlight the non-uniform, complex interactions between people and AI systems,” warns OpenAI in its official announcement.
All the different approaches each user chooses translate into different results, immersing us in grey areas that are difficult to explore.
It's the AI butterfly effect!
More Questions Arise
The paper shared by OpenAI also notes that heavy users said they would be “upset” if their chatbot’s voice or personality changed.
This reminded me of a video I recently saw on social media of a man saying he preferred a female voice and talked to the generative AI every day. Could ChatGPT also be helping men open up emotionally? What would happen if one day ChatGPT spoke to him in a male voice? Would he feel betrayed? Would he stop using ChatGPT? Was he developing a romantic connection, or simply a space of trust? Of course, it's hard not to immediately think of Spike Jonze's film Her.
Every ChatGPT account, along with its chat history (more intimate and private by the day than any WhatsApp profile or social media DMs), represents a unique relationship with countless outcomes and consequences.
The Expected Result
The two studies analyzed different aspects but reached a similar conclusion, summarized by the MIT Technology Review: "Participants who trusted and 'bonded' with ChatGPT more were likelier than others to be lonely, and to rely on it more."
While the investigation didn't focus on solutions or deeper explanations of why this is happening or how it could evolve, it seems likely that more users will join ChatGPT and other AI platforms, especially now that OpenAI's image generation tool has gone viral.
Although the conclusions of MIT and OpenAI's research aren't particularly surprising, the study provides scientific backing, with evidence, measurements, samples, and more "tangible" metrics, that could pave the way for further research and help address the implications of using artificial intelligence today.
We also received an official warning, from the chatbot's own developers, about the bonds we build with ChatGPT, along with an invitation to set limits and reflect on our interactions and current relationships (or situationships?) with chatbots.