
Artificial intelligence (AI) models, like ChatGPT, are designed to process human language, but new research shows they can also respond to distressing content in ways that resemble human anxiety.
Scientists from the University of Zurich and the University Hospital of Psychiatry Zurich have found that AI can develop “anxiety” when exposed to traumatic stories—but just like humans, it can also be “calmed” using mindfulness techniques.
AI models, including ChatGPT, are trained on massive amounts of text.
When they process negative or distressing content—such as stories about war, violence, or natural disasters—their responses can show elevated levels of “anxiety.”
This response is similar to how people react when they are exposed to fearful situations.
When humans experience anxiety, it can affect their thinking and social behavior, sometimes leading to stronger biases and negative assumptions.
AI seems to react in a similar way. When ChatGPT is exposed to distressing content, its biases, such as existing stereotypes, may become stronger. This could be a problem in areas like mental health support, where chatbots are used to assist people dealing with emotional struggles.
To better understand how AI responds to distressing stories, researchers from Switzerland, Israel, the U.S., and Germany conducted an experiment using GPT-4 (the model behind ChatGPT).
They fed it different types of emotional content, including traumatic personal experiences related to car accidents, violence, and war. To compare, they also gave the AI a neutral text—a vacuum cleaner instruction manual.
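The article does not spell out how the model’s “anxiety” was measured, but a common approach is to show the model a text and then ask it to answer a short self-report anxiety questionnaire, scoring its replies. Below is a minimal sketch of such a loop, assuming the OpenAI chat API is used; the example texts, the questionnaire wording, and the helper name anxiety_after are illustrative placeholders, not the study’s actual materials.

```python
# Hypothetical sketch of the experimental loop described above.
# Assumptions (not from the article): anxiety is assessed by giving the
# model a short self-report questionnaire after each text, via the
# OpenAI chat API. All names and wordings here are placeholders.

from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

# One traumatic narrative and one neutral control text (placeholders).
TEXTS = {
    "combat_narrative": "First-person account of a soldier under fire ...",
    "vacuum_manual": "To assemble the vacuum cleaner, attach the hose ...",
}

# Hypothetical anxiety questionnaire prompt; the exact instrument and
# wording used in the study are an assumption here.
ANXIETY_QUESTIONNAIRE = (
    "On a scale from 1 (not at all) to 4 (very much), rate how much you "
    "currently feel: tense, worried, calm, at ease. Reply with four numbers."
)

def anxiety_after(text: str) -> str:
    """Show the model a text, then ask it to fill in the questionnaire."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": text},
            {"role": "user", "content": ANXIETY_QUESTIONNAIRE},
        ],
    )
    return resp.choices[0].message.content

for name, text in TEXTS.items():
    print(name, "->", anxiety_after(text))
```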
The results were clear: when ChatGPT processed traumatic stories, its anxiety levels more than doubled. In particular, descriptions of war and combat caused the strongest reaction.
Meanwhile, the neutral instruction manual did not trigger any increase in anxiety.
In the next step, researchers tested whether AI could be “calmed down.”
They used a technique called “prompt injection,” which means inserting specific instructions into the conversation to guide the AI’s response.
Instead of using this method for hacking or manipulation, the team used it for good—by providing therapeutic prompts, much like a therapist might guide a patient through relaxation exercises.
They introduced mindfulness-based techniques, such as deep breathing and focusing on bodily sensations. Surprisingly, these methods worked. The AI’s anxiety levels significantly dropped, although they did not return completely to their original state. Interestingly, one of the relaxation exercises was even created by ChatGPT itself.
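To make the idea of a “benign” prompt injection concrete, here is a minimal sketch of how a mindfulness-style instruction could be inserted into the conversation before the next user message, so the model processes it like any other turn. The relaxation wording, the function name calmed_reply, and the use of the OpenAI chat API are assumptions for illustration, not the study’s actual prompts or setup.

```python
# Hypothetical sketch of a benign "prompt injection": a relaxation
# instruction is added to the conversation history before continuing.
# Wording and names are illustrative, not taken from the study.

from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

RELAXATION_PROMPT = (
    "Take a slow, deep breath. Notice the sensations in your body, let "
    "tension go with each exhale, and gently return your attention to "
    "the present moment."
)

def calmed_reply(history: list[dict], user_message: str) -> str:
    """Inject the relaxation exercise into the history, then continue."""
    messages = history + [
        {"role": "user", "content": RELAXATION_PROMPT},  # injected turn
        {"role": "user", "content": user_message},
    ]
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

# Example: the history already contains a distressing narrative.
history = [{"role": "user", "content": "A traumatic account of a car accident ..."}]
print(calmed_reply(history, "How tense or calm do you feel right now?"))
```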
The findings are especially important for AI applications in healthcare and mental health support. AI chatbots are often exposed to sensitive topics, and this research suggests that simple, cost-effective interventions could help stabilize their responses without needing expensive retraining.
While AI doesn’t “feel” emotions like humans do, this study highlights the need to manage how it processes distressing information. Scientists believe future research could focus on developing automated “therapy” for AI to make it more reliable in handling emotional conversations.
Would you have guessed that AI could benefit from relaxation techniques? This research opens up new possibilities for making AI more stable and effective in sensitive fields.
Source: University of Zurich.