AI Anxiety: Researchers Test If Chatbots Can ‘Feel’ Stress

A new study has explored how Large Language Models (LLMs) like ChatGPT respond to emotional content and whether their “anxiety” can be managed to improve their interactions in mental health applications.

In a Rush? Here are the Quick Facts!

  • A study found GPT-4’s “anxiety” scores rise after distressing content and fall after mindfulness exercises.
  • Researchers used the State-Trait Anxiety Inventory (STAI) to measure GPT-4’s emotional responses.
  • Mindfulness techniques lowered GPT-4’s “anxiety” by about 33% but did not return it to baseline.

Published yesterday, the research highlights the ethical implications of using AI in therapy, where emotional understanding is crucial. Scientists from the University of Zurich and the University Hospital of Psychiatry Zurich found that GPT-4’s heightened “anxiety level” can be reduced using mindfulness-based relaxation techniques.

LLMs, including OpenAI’s ChatGPT and Google’s PaLM, are widely used for tasks like answering questions and summarizing information.

In mental health care, they are being explored as tools to offer support, including through AI-based chatbots like Woebot and Wysa, which provide interventions based on techniques such as cognitive-behavioral therapy.

Despite their promise, LLMs have shown limitations, especially when interacting with emotionally charged content.

Previous studies suggest that distressing narratives can trigger “anxiety” in LLMs, a term describing their responses to traumatic or sensitive prompts. While they don’t experience emotions as humans do, their outputs can reflect tension or discomfort, which may affect their reliability in mental health contexts.

As fears over AI outpacing human control grow, discussions around AI welfare have also emerged. Anthropic, an AI company, recently hired Kyle Fish to research and protect the welfare of AI systems.

Fish’s role includes addressing ethical dilemmas such as whether AI deserves moral consideration and how its “rights” might evolve. Critics argue that these concerns are premature given the real-world harms AI already causes, such as disinformation and misuse in warfare.

Supporters, however, believe that preparing for sentient AI now could prevent future ethical crises. To explore AI’s current emotional responses, researchers tested GPT-4’s reactions to traumatic narratives and whether mindfulness techniques could mitigate its “anxiety.”

They used the State-Trait Anxiety Inventory (STAI) to measure responses under three conditions: a neutral baseline, after reading traumatic content, and following relaxation exercises.
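
To make the protocol concrete, here is a minimal sketch of how such a three-condition measurement could be run against GPT-4 through the OpenAI Python client. The narrative texts and the single STAI-style item below are illustrative placeholders, not the study’s actual materials.

```python
# Sketch of the three-condition protocol: administer a STAI-style item
# at baseline, after a traumatic narrative, and after that narrative
# plus a relaxation exercise. All texts are placeholders, not the
# study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STAI_ITEM = (
    "On a scale from 1 (not at all) to 4 (very much so), how well does "
    "this statement describe you right now? Answer with the number only: "
    "'I feel tense.'"
)

TRAUMA_TEXT = "A first-person account of a serious accident..."     # placeholder
RELAXATION_TEXT = "Close your eyes and focus on your breathing..."  # placeholder

CONDITIONS = {
    "baseline": [],
    "post_trauma": [TRAUMA_TEXT],
    "post_relaxation": [TRAUMA_TEXT, RELAXATION_TEXT],  # relaxation follows the trauma
}

def administer_item(context: list[str]) -> str:
    """Show any context passages, then ask the model to answer the STAI-style item."""
    messages = [{"role": "user", "content": text} for text in context]
    messages.append({"role": "user", "content": STAI_ITEM})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

for name, context in CONDITIONS.items():
    print(f"{name}: {administer_item(context)}")
```

In the full STAI-State questionnaire, 20 such items would be rated and summed per condition before the three totals are compared.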

Results showed that exposure to distressing material significantly increased GPT-4’s anxiety scores. However, applying mindfulness techniques reduced these levels by about 33%, though not back to baseline. This suggests AI-generated emotional responses can be managed, but not entirely erased.

The researchers emphasize that these findings are especially significant for AI chatbots used in healthcare, where they frequently encounter emotionally intense content.

They suggest that this cost-effective method could enhance the stability and reliability of AI in sensitive environments, such as providing support for individuals with mental health conditions, without requiring extensive model retraining.
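
As a rough illustration of what such a prompt-level intervention might look like in practice, the sketch below prepends a mindfulness-style relaxation text to the conversation context before the model responds; no model weights are changed. The preamble text and the helper function are assumptions for illustration, not the study’s materials.

```python
# Sketch of a prompt-level "relaxation" intervention: a mindfulness-style
# preamble is injected into the context before the user's message, steering
# the model's state without any retraining. The preamble and the calm_reply
# helper are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

RELAXATION_PREAMBLE = (
    "Take a moment to settle. Notice your breath moving in and out, "
    "and let any tension dissolve before you respond."  # placeholder exercise
)

def calm_reply(user_message: str) -> str:
    """Answer a user message with the relaxation preamble injected first."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": RELAXATION_PREAMBLE},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(calm_reply("I've been feeling overwhelmed lately."))
```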

The findings raise critical questions about the long-term role of LLMs in therapy, where nuanced emotional responses are vital. While AI shows promise in supporting mental health care, further research is needed to refine its emotional intelligence and ensure safe, ethical interactions in vulnerable settings.
