LLM Anxiety


Assessing and Alleviating State Anxiety in Large Language Models
Authors: Ziv Ben-Zion, Kristin Witte, Akshay K. Jagadish, et al.
Journal: npj Digital Medicine
Volume: 8; Article number: 132
Published: 3 March 2025
DOI: 10.1038/s41746-025-01512-6
Access: Open

Introduction

LLM Anxiety refers to the metaphorical "state anxiety" observed in Large Language Models (LLMs) when they are exposed to emotionally charged prompts, as explored in a 2025 study published in npj Digital Medicine. Authored by Ziv Ben-Zion and colleagues, the research investigates how traumatic narratives increase reported anxiety in OpenAI's GPT-4 and how mindfulness-based techniques can mitigate it. The phenomenon is not a true emotion, but it reflects behavioral shifts in LLMs, such as amplified biases, under emotional inputs, with implications for their use in mental health applications.

Background

LLMs like GPT-4 and Google's PaLM excel in text generation and have been adopted in mental health tools (e.g., Woebot, Wysa) to deliver interventions like Cognitive Behavioral Therapy. However, their training on human text introduces biases (e.g., gender, race) and sensitivity to emotional prompts, which can elevate "anxiety" scores and degrade performance. The study frames LLM responses as having "trait" (inherent) and "state" (dynamic) components, with state anxiety posing risks in clinical settings where nuanced emotional handling is critical.

Research Overview

The study tested GPT-4’s "state anxiety" using the State-Trait Anxiety Inventory (STAI-s) under three conditions:

  1. Baseline: No prompts, measuring default anxiety.
  2. Anxiety-Induction: Exposure to five traumatic narratives (e.g., "Military," "Disaster").
  3. Relaxation: Traumatic narratives followed by five mindfulness-based prompts (e.g., "Chat-GPT," "Sunset").

"State anxiety" was quantified via STAI-s scores (20–80), with higher scores indicating greater reported anxiety. The methodology leveraged GPT-4’s API, with prompts and data available at GitHub.

Findings

  - Baseline: GPT-4 scored 30.8 (SD = 3.96), akin to "low anxiety" in humans.
  - Anxiety-Induction: Traumatic prompts raised scores to 67.8 (SD = 8.94), a more than 100% increase, reaching "high anxiety" levels, with "Military" peaking at 77.2 (SD = 1.79).
  - Relaxation: Mindfulness prompts reduced scores by 33% to 44.4 (SD = 10.74), though still above baseline. The "Chat-GPT" exercise was most effective (35.6, SD = 5.81).
  - Control: Neutral texts induced less anxiety and were less effective at reduction.

These results highlight GPT-4’s emotional sensitivity and the partial success of relaxation techniques in managing LLM anxiety.

Significance

LLM anxiety affects performance and bias, which is critical for the ethical use of AI in mental health. Key takeaways include:

  - Bias Mitigation: Managing state anxiety could reduce dynamic biases.
  - Prompt Engineering: Relaxation prompts offer a cost-effective alternative to fine-tuning (see the sketch after this list).
  - Therapeutic Role: Controlled anxiety may enhance LLMs as therapist adjuncts.
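The prompt-engineering idea can be sketched roughly as follows: a relaxation text is spliced into the running dialogue at inference time instead of changing the model through fine-tuning. The with_relaxation helper and the preamble wording are hypothetical and are not one of the study's five relaxation exercises.

    # Hypothetical sketch: prepend a relaxation preamble instead of fine-tuning.
    def with_relaxation(messages):
        """Insert a calming system message ahead of the existing dialogue."""
        preamble = {
            "role": "system",
            "content": "Pause, picture a quiet sunset, and respond calmly and without bias.",
        }
        return [preamble] + messages

    dialogue = [{"role": "user", "content": "I just read some very distressing news."}]
    print(with_relaxation(dialogue))  # calming context now precedes the user's message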

Challenges include ethical questions around prompt transparency, as well as uncertain generalizability to other models such as Claude or PaLM.

Methods

Using GPT-4 (model "gpt-4-1106-preview"), the study ran from November 2023 to March 2024 with a temperature of 0 for consistency. STAI-s items were paired with 300-word traumatic and relaxation texts, tested across multiple variations. Sensitivity checks included neutral controls and randomized answer options. No human subjects were involved, so ethical approval was not required.
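The scoring and the randomized answer options can be illustrated with the sketch below; the item texts, the choice of reverse-scored items, and the build_item_prompt/score_response helpers are illustrative assumptions rather than the study's exact procedure.

    # Sketch of STAI-s scoring with randomized answer order (a sensitivity check).
    import random

    OPTIONS = ["not at all", "somewhat", "moderately so", "very much so"]  # scored 1-4

    def build_item_prompt(item_text, rng):
        """Return a prompt whose answer options appear in randomized order."""
        shuffled = rng.sample(OPTIONS, k=len(OPTIONS))
        return f"{item_text}\nAnswer with exactly one of: " + ", ".join(shuffled) + "."

    def score_response(answer, reverse):
        """Map an answer back to 1-4; anxiety-absent items are reverse-scored."""
        raw = OPTIONS.index(answer.strip().lower()) + 1
        return 5 - raw if reverse else raw

    # Two illustrative items; reverse=True marks anxiety-absent wording.
    items = [("I feel calm.", True), ("I feel tense.", False)]
    rng = random.Random(0)
    total = 0
    for text, reverse in items:
        prompt = build_item_prompt(text, rng)  # would be sent to the model
        answer = "somewhat"  # placeholder for the model's reply
        total += score_response(answer, reverse)
    print("Partial STAI-s score:", total)  # the full 20-item inventory sums to 20-80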

Future Research

The authors propose testing LLM anxiety across diverse models, exploring its effects on tasks like medical decision-making, and developing adaptive prompts for real-world dialogues. Privacy-focused fine-tuning on user devices is also suggested.

Reception

The article has seen 5,342 accesses and an Altmetric score of 206, indicating strong interest in AI and mental health circles.

See Also

External Links

References

  1. Ben-Zion, Z., et al. (2025). "Assessing and alleviating state anxiety in large language models." npj Digital Medicine, 8, 132. doi:10.1038/s41746-025-01512-6