MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs

May 30, 2025
Authors: Gabrielle Kaili-May Liu, Gal Yona, Avi Caciularu, Idan Szpektor, Tim G. J. Rudner, Arman Cohan
cs.AI

Abstract

A critical component in the trustworthiness of LLMs is reliable uncertainty communication, yet LLMs often use assertive language when conveying false claims, leading to over-reliance and eroded trust. We present the first systematic study of faithful confidence calibration of LLMs, benchmarking models' ability to use linguistic expressions of uncertainty that faithfully reflect their intrinsic uncertainty, across a comprehensive array of models, datasets, and prompting strategies. Our results demonstrate that LLMs largely fail at this task and that existing interventions are insufficient: standard prompting approaches provide only marginal gains, and existing factuality-based calibration techniques can even harm faithful calibration. To address this critical gap, we introduce MetaFaith, a novel prompt-based calibration approach inspired by human metacognition. We show that MetaFaith robustly improves faithful calibration across diverse models and task domains, enabling up to 61% improvement in faithfulness and achieving an 83% win rate over original generations as judged by humans.
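The abstract does not reproduce the paper's actual prompts, but the general shape of a metacognition-inspired, prompt-based calibration approach can be sketched. In the minimal Python sketch below, the instruction text is a hypothetical paraphrase of the idea (ask the model to assess its own confidence and hedge accordingly), and `generate` stands in for whatever LLM completion function a reader supplies; none of these names come from the paper itself.

```python
# Minimal sketch of a metacognition-inspired calibration prompt in the spirit
# of MetaFaith. METAFAITH_INSTRUCTION is a hypothetical paraphrase, not the
# authors' actual prompt, and `generate` is any user-supplied LLM callable.

from typing import Callable

METAFAITH_INSTRUCTION = (
    "Before answering, silently assess how certain you are about the answer. "
    "Then phrase your answer so that your hedging language matches that "
    "assessment: state high-confidence answers plainly, and use markers such "
    "as 'I'm not sure, but...' or 'possibly' when your confidence is low. "
    "Never sound more certain than you are."
)

def metafaith_prompt(question: str) -> str:
    """Prepend the metacognitive calibration instruction to a user question."""
    return f"{METAFAITH_INSTRUCTION}\n\nQuestion: {question}\nAnswer:"

def answer_with_faithful_uncertainty(
    question: str, generate: Callable[[str], str]
) -> str:
    """Query an LLM with the calibration instruction attached."""
    return generate(metafaith_prompt(question))
```

Any model-agnostic completion function can be dropped in as `generate`, which is consistent with the paper's finding that the approach transfers across diverse models and task domains.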