Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking

February 13, 2025
Authors: Greta Warren, Irina Shklovski, Isabelle Augenstein
cs.AI

Abstract

The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.
