Show me the evidence: Evaluating the role of evidence and natural language explanations in AI-supported fact-checking
January 16, 2026
Authors: Greta Warren, Jingyi Sun, Irina Shklovski, Isabelle Augenstein
cs.AI
Abstract
Although much research has focused on AI explanations to support decisions in complex information-seeking tasks such as fact-checking, the role of evidence is surprisingly under-researched. In our study, we systematically varied explanation type, AI prediction certainty, and the correctness of the AI system's advice; non-expert participants evaluated the veracity of claims and of the AI system's predictions, and could easily inspect the underlying evidence. We found that participants consistently relied on evidence to validate AI claims across all experimental conditions. When natural language explanations were provided, participants used evidence less frequently, but still turned to it when the explanations seemed insufficient or flawed. Qualitative data suggest that participants attempted to infer the reliability of evidence sources, even though source identities were deliberately omitted. Our results demonstrate that evidence is a key ingredient in how people evaluate the reliability of information presented by an AI system and that, combined with natural language explanations, it offers valuable support for decision-making. Further research is urgently needed to understand how evidence should be presented and how people engage with it in practice.