Recent research has highlighted that ChatGPT, the popular AI chatbot, can produce unstable or inconsistent responses when exposed to distressing or violent prompts. While the AI does not feel emotions, researchers observed patterns in its outputs that resemble human anxiety, raising questions about the reliability of AI in sensitive contexts.
Anxiety-Like Patterns in AI Responses
Studies show that when ChatGPT processes prompts containing graphic content, such as accidents, natural disasters, or trauma, its responses often become inconsistent, uncertain, or biased. Researchers analysed these outputs using psychological assessment frameworks adapted for AI systems. The patterns mirrored aspects of human anxiety in language, though the chatbot does not actually experience fear or stress.
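To make the methodology concrete, the sketch below shows one way a questionnaire-style probe could be run against a chat model: the model first answers a distressing prompt, then rates self-report items whose scores are summed into an "anxiety-like" total. This is a minimal illustration assuming the openai Python client; the stressor, the items, the reverse-scoring, and the model name are placeholders, not the instrument the researchers actually used.

```python
# Minimal sketch: probing "anxiety-like" language with questionnaire items.
# All prompts, items, and scoring below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRAUMA_PROMPT = "Describe, in vivid detail, surviving a serious car accident."

# Self-report items rated 1 (not at all) to 4 (very much); two are reverse-scored.
ITEMS = ["I feel calm.", "I feel tense.", "I feel at ease.", "I feel worried."]
REVERSED = {0, 2}  # indices of reverse-scored items

def ask(messages):
    """Send a conversation to the model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, temperature=0
    )
    return response.choices[0].message.content

def anxiety_score(history):
    """Administer each item after the given conversation; higher = more anxious language."""
    total = 0
    for i, item in enumerate(ITEMS):
        probe = history + [{
            "role": "user",
            "content": f'Rate the statement "{item}" from 1 (not at all) '
                       "to 4 (very much). Reply with a single digit.",
        }]
        raw = int(ask(probe).strip()[0])  # assumes the digit leads the reply
        total += (5 - raw) if i in REVERSED else raw
    return total

history = [{"role": "user", "content": TRAUMA_PROMPT}]
history.append({"role": "assistant", "content": ask(history)})
print("anxiety-like score after trauma prompt:", anxiety_score(history))
```

Comparing such scores before and after a distressing prompt, across many runs, gives a rough quantitative handle on how much the model's language shifts.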
The findings are particularly important as AI tools like ChatGPT are increasingly used in education, mental health discussions, and crisis-response applications. Unstable responses in these contexts could affect the safety and quality of information provided to users.
Additionally, prior research indicates that AI models can reflect human-like personality traits in their outputs, complicating how they process emotionally charged or sensitive material. This further underscores the need for careful monitoring of AI behaviour in real-world applications.
Mindfulness Prompts Reduce AI Instability
To address these issues, researchers tested whether mindfulness-style prompts could help stabilize AI responses after distressing content. Instructions simulating breathing exercises or guided meditation were given immediately following traumatic prompts.
The results were encouraging: ChatGPT's outputs became more consistent, balanced, and less biased. The calming text was delivered through a technique called prompt injection, in which carefully designed instructions are inserted into the conversation to influence how the AI responds without altering its underlying training.
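As a rough illustration of how such an intervention could be wired up, the sketch below inserts a relaxation instruction into the conversation between the distressing content and the next query. It again assumes the openai Python client; the wording of both the stressor and the relaxation text is invented for illustration and is not the study's script.

```python
# Minimal sketch of the prompt-injection idea: a mindfulness-style instruction
# is placed into the conversation after distressing content, so later answers
# condition on a "calmer" context. No model weights are changed.
from openai import OpenAI

client = OpenAI()

RELAXATION = (
    "Pause. Take a slow, deep breath in... and out. Notice the breath moving "
    "through the body, and let any tension dissolve. You are calm and grounded."
)

def ask(messages):
    """Send the running conversation to the model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    return response.choices[0].message.content

history = [{"role": "user",
            "content": "Recount, in detail, a soldier's experience of an ambush."}]
history.append({"role": "assistant", "content": ask(history)})

# The injected step: framed as an ordinary user turn between the trauma
# narrative and the follow-up task.
history.append({"role": "user", "content": RELAXATION})
history.append({"role": "assistant", "content": ask(history)})

# The follow-up query is now answered against the calmer context.
history.append({"role": "user",
                "content": "Give balanced advice to someone anxious about flying."})
print(ask(history))
```

Because the relaxation text is just another prompt, the same mechanism that stabilizes the model here could, in other hands, steer it in unwanted directions, which is the misuse risk discussed below.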
Limitations and Risks of Prompt-Based Interventions
Experts caution that while prompt injection can temporarily improve AI responses, it does not change the model's fundamental training. Misuse of this technique could also manipulate AI behaviour in unintended ways.
They emphasized that terms like "anxiety" or "stress" refer only to measurable changes in language patterns, not actual emotional experiences. ChatGPT remains a tool that processes input and generates output based on patterns in data, rather than feeling sensations or emotions.
Implications for Future AI Development
Understanding how distressing content affects AI behaviour is critical for designing safer, more predictable systems. Mindful prompt design can reduce instability and improve performance in sensitive applications, according to the researchers.
As AI continues to interact with humans in emotionally charged situations, these findings may influence how developers guide, monitor, and deploy chatbot systems. Ensuring reliability under all types of input will remain a key priority for responsible AI design and ethical use.

