CiteAudit: You Cited It, But Did You Read It? A Benchmark for Verifying Scientific References in the LLM Era
February 26, 2026
Authors: Zhengqing Yuan, Kaiwen Shi, Zheyuan Zhang, Lichao Sun, Nitesh V. Chawla, Yanfang Ye
cs.AI
Abstract
Scientific research relies on accurate citation for attribution and integrity, yet large language models (LLMs) introduce a new risk: fabricated references that appear plausible but correspond to no real publications. Such hallucinated citations have already been observed in submissions and accepted papers at major machine learning venues, exposing vulnerabilities in peer review. Meanwhile, rapidly growing reference lists make manual verification impractical, and existing automated tools remain fragile to noisy and heterogeneous citation formats and lack standardized evaluation. We present the first comprehensive benchmark and detection framework for hallucinated citations in scientific writing. Our multi-agent verification pipeline decomposes citation checking into claim extraction, evidence retrieval, passage matching, reasoning, and calibrated judgment to assess whether a cited source truly supports its claim. We construct a large-scale human-validated dataset across domains and define unified metrics for citation faithfulness and evidence alignment. Experiments with state-of-the-art LLMs reveal substantial citation errors and show that our framework significantly outperforms prior methods in both accuracy and interpretability. This work provides the first scalable infrastructure for auditing citations in the LLM era and practical tools to improve the trustworthiness of scientific references.
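The staged pipeline described above (claim extraction, evidence retrieval, passage matching, reasoning, and calibrated judgment) can be sketched as a minimal Python skeleton. This is an illustrative stand-in, not the paper's implementation: all function names are hypothetical, retrieval and matching use simple lexical (Jaccard) overlap in place of the learned retrievers and LLM agents the abstract implies, and the confidence calibration is a toy heuristic.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    supported: bool
    confidence: float
    rationale: str


def extract_claim(sentence: str) -> str:
    """Stage 1: isolate the claim text attributed to the citation."""
    return sentence.strip().rstrip(".")


def retrieve_evidence(claim: str, corpus: dict) -> list:
    """Stage 2: fetch candidate passages from the cited source
    (keyword-overlap stand-in for a real retriever)."""
    terms = set(claim.lower().split())
    return [p for p in corpus.values() if terms & set(p.lower().split())]


def match_passages(claim: str, passages: list) -> list:
    """Stage 3: rank passages by lexical overlap with the claim
    (Jaccard similarity as a stand-in for semantic matching)."""
    terms = set(claim.lower().split())
    scored = []
    for p in passages:
        ptoks = set(p.lower().split())
        scored.append((len(terms & ptoks) / len(terms | ptoks), p))
    return sorted(scored, reverse=True)


def judge(ranked: list, threshold: float = 0.2) -> Verdict:
    """Stages 4-5: reason over the best evidence and emit a
    calibrated verdict (toy heuristic calibration)."""
    if not ranked:
        return Verdict(False, 0.9,
                       "no passage retrieved; citation may be fabricated")
    score, best = ranked[0]
    if score >= threshold:
        return Verdict(True, min(0.5 + score, 1.0),
                       f"supported by passage: {best!r}")
    return Verdict(False, min(0.5 + (threshold - score), 1.0),
                   "retrieved evidence does not align with the claim")


def audit_citation(sentence: str, corpus: dict) -> Verdict:
    """Run the full verification pipeline for one cited claim."""
    claim = extract_claim(sentence)
    passages = retrieve_evidence(claim, corpus)
    ranked = match_passages(claim, passages)
    return judge(ranked)
```

A real system would replace each stage with an LLM agent or retriever, but the decomposition itself is the point: each stage produces an inspectable intermediate, which is what gives the framework its interpretability.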