Show me the evidence: Evaluating the role of evidence and natural language explanations in AI-supported fact-checking
January 16, 2026
Authors: Greta Warren, Jingyi Sun, Irina Shklovski, Isabelle Augenstein
cs.AI
Abstract
Although much research has focused on AI explanations to support decisions in complex information-seeking tasks such as fact-checking, the role of evidence is surprisingly under-researched. In our study, we systematically varied explanation type, AI prediction certainty, and correctness of AI system advice for non-expert participants, who evaluated the veracity of claims and AI system predictions. Participants were given the option of easily inspecting the underlying evidence. We found that participants consistently relied on evidence to validate AI claims across all experimental conditions. When natural language explanations were presented, participants used evidence less frequently, although they still relied on it when the explanations seemed insufficient or flawed. Qualitative data suggests that participants attempted to infer the reliability of evidence sources, despite source identities being deliberately omitted. Our results demonstrate that evidence is a key ingredient in how people evaluate the reliability of information presented by an AI system and that, in combination with natural language explanations, it offers valuable support for decision-making. Further research is urgently needed to understand how evidence ought to be presented and how people engage with it in practice.