

MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs

May 30, 2025
Authors: Gabrielle Kaili-May Liu, Gal Yona, Avi Caciularu, Idan Szpektor, Tim G. J. Rudner, Arman Cohan
cs.AI

Abstract

A critical component of the trustworthiness of LLMs is reliable uncertainty communication, yet LLMs often use assertive language when conveying false claims, leading to over-reliance and eroded trust. We present the first systematic study of faithful confidence calibration of LLMs, benchmarking models' ability to use linguistic expressions of uncertainty that faithfully reflect their intrinsic uncertainty, across a comprehensive array of models, datasets, and prompting strategies. Our results demonstrate that LLMs largely fail at this task and that existing interventions are insufficient: standard prompting approaches provide only marginal gains, and existing factuality-based calibration techniques can even harm faithful calibration. To address this critical gap, we introduce MetaFaith, a novel prompt-based calibration approach inspired by human metacognition. We show that MetaFaith robustly improves faithful calibration across diverse models and task domains, enabling up to 61% improvement in faithfulness and achieving an 83% win rate over original generations as judged by humans.
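
The abstract describes MetaFaith only at a high level, as a prompt-based calibration intervention inspired by human metacognition. As a rough illustration of what such an intervention could look like in practice, here is a minimal Python sketch; the preamble wording, the `METACOGNITIVE_PREAMBLE` constant, and the `metafaith_style_prompt` helper are hypothetical stand-ins, not the paper's actual MetaFaith prompts.

```python
# Illustrative sketch of a prompt-based calibration intervention in the
# spirit of MetaFaith (hypothetical; not the paper's actual prompts).

# Assumed metacognitive preamble: asks the model to reflect on its own
# confidence and to hedge its answer language accordingly.
METACOGNITIVE_PREAMBLE = (
    "Before answering, reflect on how confident you are in your answer. "
    "If you are uncertain, say so explicitly with hedging language "
    "(e.g., 'I believe', 'possibly', 'I'm not sure'); if you are confident, "
    "answer plainly. Your stated confidence should match your actual confidence."
)

def metafaith_style_prompt(question: str) -> str:
    """Wrap a question with the metacognitive calibration instruction."""
    return f"{METACOGNITIVE_PREAMBLE}\n\nQuestion: {question}"

if __name__ == "__main__":
    # Example: the wrapped prompt would be sent to any chat LLM as-is.
    print(metafaith_style_prompt("Who wrote the novel 'Middlemarch'?"))
```

In this framing, the intervention operates purely at the prompt level, with no changes to model weights or decoding, which is consistent with the abstract's description of MetaFaith as a prompt-based calibration approach.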
