MentraSuite: Post-Training Large Language Models for Mental Health Reasoning and Assessment
December 10, 2025
Authors: Mengxi Xiao, Kailai Yang, Pengde Zhao, Enze Zhang, Ziyan Kuang, Zhiwei Liu, Weiguang Han, Shu Liao, Lianting Huang, Jinpeng Hu, Min Peng, Qianqian Xie, Sophia Ananiadou
cs.AI
Abstract
Mental health disorders affect hundreds of millions of people globally, and the Web now serves as a primary medium for accessing support, information, and assessment. Large language models (LLMs) offer scalable and accessible assistance, yet their deployment in mental-health settings remains risky when their reasoning is incomplete, inconsistent, or ungrounded. Existing psychological LLMs emphasize emotional understanding or knowledge recall but overlook the step-wise, clinically aligned reasoning required for appraisal, diagnosis, intervention planning, abstraction, and verification. To address these issues, we introduce MentraSuite, a unified framework for advancing reliable mental-health reasoning. We propose MentraBench, a comprehensive benchmark spanning five core reasoning aspects, six tasks, and 13 datasets, evaluating both task performance and reasoning quality across five dimensions: conciseness, coherence, hallucination avoidance, task understanding, and internal consistency. We further present Mindora, a post-trained model optimized through a hybrid SFT-RL framework with an inconsistency-detection reward that enforces faithful and coherent reasoning. To support training, we construct high-quality trajectories using a novel reasoning-trajectory generation strategy that strategically filters difficult samples and applies a structured, consistency-oriented rewriting process to produce concise, readable, and well-balanced trajectories. Across 20 evaluated LLMs, Mindora achieves the highest average performance on MentraBench and shows strong reasoning reliability, demonstrating its effectiveness for complex mental-health scenarios.
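The abstract's inconsistency-detection reward can be pictured as shaping an RL objective: the model earns task reward only when its final answer is correct, and loses reward when the answer contradicts its own intermediate conclusions. The sketch below is purely illustrative and is not the paper's implementation; the function name, the `Conclusion:` trace convention, and the penalty weight are all hypothetical assumptions.

```python
# Illustrative sketch (NOT the paper's method): a reward combining task
# correctness with an inconsistency penalty, in the spirit of the hybrid
# SFT-RL setup described above. All names and conventions are hypothetical.

def inconsistency_reward(answer: str, reasoning_steps: list[str],
                         gold: str, penalty: float = 0.5) -> float:
    """Reward = task accuracy minus a penalty if the final answer
    contradicts an intermediate conclusion in the reasoning trace."""
    # Task reward: exact match against the gold label.
    task_reward = 1.0 if answer.strip().lower() == gold.strip().lower() else 0.0
    # Naive consistency check: any trace line that asserts a conclusion
    # not containing the final answer counts as a contradiction.
    contradictions = sum(
        1 for step in reasoning_steps
        if step.lower().startswith("conclusion:")
        and answer.strip().lower() not in step.lower()
    )
    return task_reward - penalty * min(contradictions, 1)
```

In a real pipeline the contradiction check would be a learned detector rather than string matching, but the reward shape, correctness minus a faithfulness penalty, is the idea the abstract describes.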