

Explaining Sources of Uncertainty in Automated Fact-Checking

May 23, 2025
Authors: Jingyi Sun, Greta Warren, Irina Shklovski, Isabelle Augenstein
cs.AI

Abstract

Understanding sources of a model's uncertainty regarding its predictions is crucial for effective human-AI collaboration. Prior work proposes using numerical uncertainty or hedges ("I'm not sure, but ..."), which do not explain uncertainty that arises from conflicting evidence, leaving users unable to resolve disagreements or rely on the output. We introduce CLUE (Conflict-and-Agreement-aware Language-model Uncertainty Explanations), the first framework to generate natural language explanations of model uncertainty by (i) identifying relationships between spans of text that expose claim-evidence or inter-evidence conflicts and agreements that drive the model's predictive uncertainty in an unsupervised way, and (ii) generating explanations via prompting and attention steering that verbalize these critical interactions. Across three language models and two fact-checking datasets, we show that CLUE produces explanations that are more faithful to the model's uncertainty and more consistent with fact-checking decisions than prompting for uncertainty explanations without span-interaction guidance. Human evaluators judge our explanations to be more helpful, more informative, less redundant, and more logically consistent with the input than this baseline. CLUE requires no fine-tuning or architectural changes, making it plug-and-play for any white-box language model. By explicitly linking uncertainty to evidence conflicts, it offers practical support for fact-checking and generalises readily to other tasks that require reasoning over complex information.
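The "attention steering" step the abstract mentions can be pictured as biasing the model's attention toward the identified conflict/agreement spans during generation. The following is a toy sketch of that idea, not CLUE's actual implementation: it adds a positive bias to the pre-softmax attention scores at hypothetical span positions, so more attention mass lands on those tokens. All names (`steer_attention`, the bias value, the example scores) are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def steer_attention(scores, span, bias=2.0):
    """Add a positive bias to the raw (pre-softmax) attention scores
    at the token positions in `span`, then renormalise, so the model
    attends more strongly to the highlighted span."""
    biased = [s + (bias if i in span else 0.0) for i, s in enumerate(scores)]
    return softmax(biased)

# Toy example: 5 evidence tokens; positions 1 and 2 form the span
# flagged as a claim-evidence conflict.
raw = [0.2, 0.1, 0.3, 0.0, 0.4]
plain = softmax(raw)
steered = steer_attention(raw, span={1, 2})
# Attention mass shifts toward the highlighted span.
assert steered[1] + steered[2] > plain[1] + plain[2]
```

In a real transformer this bias would be applied inside the attention layers (e.g. as an additive mask on the attention logits) rather than on a standalone score vector; the sketch only shows why biasing before the softmax concentrates attention on the chosen spans without retraining the model.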


May 28, 2025