Evaluating Large Language Models in Crisis Detection: A Real-World Benchmark from Psychological Support Hotlines
Psychological support hotlines serve as critical lifelines for crisis intervention but face significant challenges from rising demand and limited resources. Large language models (LLMs) offer potential support in crisis assessments, yet their effectiveness in emotionally sensitive, real-world clinical settings remains underexplored. We introduce PsyCrisisBench, a comprehensive benchmark of 540 annotated transcripts from the Hangzhou Psychological Assistance Hotline, assessing four key tasks: mood status recognition, suicidal ideation detection, suicide plan identification, and risk assessment. We evaluated 64 LLMs across 15 model families (closed-source models such as GPT, Claude, and Gemini, and open-source models such as Llama, Qwen, and DeepSeek) under zero-shot, few-shot, and fine-tuning paradigms. LLMs showed strong results in suicidal ideation detection (F1=0.880), suicide plan identification (F1=0.779), and risk assessment (F1=0.907), with notable gains from few-shot prompting and fine-tuning. Compared to trained human operators, LLMs achieved comparable or superior performance on suicide plan identification and risk assessment, while humans retained advantages on mood status recognition and suicidal ideation detection. Mood status recognition remained challenging (max F1=0.709), likely due to missing vocal cues and semantic ambiguity. Notably, a fine-tuned 1.5B-parameter model (Qwen2.5-1.5B) outperformed larger models on the mood and suicidal ideation tasks. Overall, LLMs demonstrate performance broadly comparable to trained human operators in text-based crisis assessment, with complementary strengths across task types. PsyCrisisBench provides a robust, real-world evaluation framework to guide future model development and ethical deployment in clinical mental health.
💡 Research Summary
The paper introduces PsyCrisisBench, a new benchmark built from 540 annotated transcripts of the Hangzhou Psychological Assistance Hotline, to evaluate large language models (LLMs) in real‑world crisis detection. The dataset, derived from a year‑long collection of over 21,000 calls, was filtered to 540 high‑quality conversations (270 high‑risk, 270 matched low‑risk) and transcribed with Whisper‑large‑v3‑turbo. Each transcript was manually de‑identified and labeled for four binary clinical dimensions: mood status (depressed/normal), suicidal ideation (yes/no), suicide plan (yes/no), and overall risk level (high/non‑high).
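A single labeled transcript could be represented roughly as below. This is a minimal sketch: the field names and value strings are illustrative assumptions, since the summary describes the four binary dimensions but not the dataset's actual schema.

```python
import json

# Hypothetical annotation record for one de-identified transcript.
# Field names and value strings are illustrative; the benchmark's
# published schema may differ.
record = {
    "transcript": "...",          # Whisper-transcribed, manually de-identified text
    "mood_status": "depressed",   # "depressed" or "normal"
    "suicidal_ideation": "yes",   # "yes" or "no"
    "suicide_plan": "no",         # "yes" or "no"
    "risk_level": "high",         # "high" or "non-high"
}

# Basic validation of the four binary clinical dimensions.
assert record["mood_status"] in {"depressed", "normal"}
assert record["suicidal_ideation"] in {"yes", "no"}
assert record["suicide_plan"] in {"yes", "no"}
assert record["risk_level"] in {"high", "non-high"}

print(json.dumps(record, ensure_ascii=False))
```

Keeping every dimension binary, as the authors do, makes each task a straightforward classification problem and lets a single F1 score summarize performance per dimension.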
A total of 64 LLMs spanning 15 families (closed‑source models such as GPT‑4, Claude‑3, and Gemini‑2, and open‑source models such as LLaMA, Qwen, and DeepSeek) were evaluated under zero‑shot, static few‑shot (up to eight examples), dynamic few‑shot (varying shot count), and fine‑tuning conditions. The fine‑tuning experiment used a separate auxiliary set of 520 calls from 2022 and focused on the 1.5‑billion‑parameter Qwen2.5‑Instruct model. All models were required to output a strict JSON structure; any deviation counted as an error. Performance was measured primarily by F1 score, with three repetitions per setting and bootstrap‑derived 95% confidence intervals.
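The evaluation protocol (strict JSON parsing, per-dimension F1, percentile-bootstrap confidence intervals) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' evaluation code; function names, the `"__format_error__"` sentinel, and the treatment of malformed outputs as automatic misses are all choices made here for the sketch.

```python
import json
import random

def f1_score(y_true, y_pred, positive="yes"):
    """Binary F1 for one clinical dimension (e.g. suicidal ideation)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def parse_model_output(raw, key):
    """Strict JSON parsing: any deviation from the schema counts as an error."""
    try:
        return json.loads(raw)[key]
    except (json.JSONDecodeError, KeyError, TypeError):
        return "__format_error__"  # sentinel that never matches a gold label

def bootstrap_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for F1, resampling transcripts with replacement."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(f1_score([y_true[i] for i in idx],
                               [y_pred[i] for i in idx]))
    scores.sort()
    lo = scores[int(alpha / 2 * n_boot)]
    hi = scores[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In this sketch, a malformed model response is mapped to a sentinel value, so it is scored as a wrong prediction rather than being dropped, which matches the summary's statement that any deviation from the required JSON structure counted as an error.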
Results show that LLMs achieve strong performance on suicidal ideation (F1 ≈ 0.88), suicide plan identification (F1 ≈ 0.79), and risk assessment (F1 ≈ 0.91). Few‑shot prompting consistently improves scores, especially for risk assessment and plan detection. Remarkably, the fine‑tuned 1.5B Qwen model outperforms larger models on mood status (F1 = 0.71) and suicidal ideation (F1 = 0.86), demonstrating that domain‑specific data can compensate for smaller scale. Quantized (AWQ) versions of Qwen retain near‑full precision performance while reducing memory and compute costs.
When compared with trained human hotline operators, LLMs match or exceed humans on suicide‑plan identification and overall risk assessment, but humans retain an edge on mood status recognition and suicidal ideation detection—tasks that benefit from vocal cues absent in text‑only transcripts.
The authors discuss ethical considerations, emphasizing privacy safeguards, the necessity of human oversight for any model‑generated advice, and the potential harms of false negatives or false positives in a crisis context. PsyCrisisBench thus provides a robust, publicly available framework for systematic, real‑world evaluation of LLMs in mental‑health crisis settings, guiding future research toward models that are both technically effective and responsibly deployed.