
Paper Reconstruction Evaluation: Evaluating Presentation and Hallucination in AI-written Papers

April 1, 2026
Authors: Atsuyuki Miyai, Mashiro Toyooka, Zaiying Zhao, Kenta Watanabe, Toshihiko Yamasaki, Kiyoharu Aizawa
cs.AI

Abstract

This paper introduces the first systematic evaluation framework for quantifying the quality and risks of papers written by modern coding agents. While AI-driven paper writing has become a growing concern, rigorous evaluation of the quality and potential risks of AI-written papers remains limited, and a unified understanding of their reliability is still lacking. We introduce Paper Reconstruction Evaluation (PaperRecon), an evaluation framework in which an overview (overview.md) is created from an existing paper, after which an agent generates a full paper based on the overview and minimal additional resources, and the result is subsequently compared against the original paper. PaperRecon disentangles the evaluation of AI-written papers into two orthogonal dimensions, Presentation and Hallucination, where Presentation is evaluated using a rubric and Hallucination is assessed via agentic evaluation grounded in the original paper source. For evaluation, we introduce PaperWrite-Bench, a benchmark of 51 papers from top-tier venues across diverse domains published after 2025. Our experiments reveal a clear trade-off: while both ClaudeCode and Codex improve with model advances, ClaudeCode achieves higher presentation quality at the cost of more than 10 hallucinations per paper on average, whereas Codex produces fewer hallucinations but lower presentation quality. This work takes a first step toward establishing evaluation frameworks for AI-driven paper writing and improving the understanding of its risks within the research community.
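The abstract describes a three-stage pipeline: condense an existing paper into an overview, have an agent regenerate a full paper from that overview, then compare the result to the original along two axes (rubric-scored presentation and source-grounded hallucination counting). The sketch below is a minimal, hypothetical illustration of that control flow only; every function name and scoring heuristic here is an assumption and stands in for the authors' actual summarizer, agent, rubric, and agentic checker.

```python
from dataclasses import dataclass

# Hypothetical sketch of the PaperRecon pipeline shape described in the
# abstract. All names and heuristics are illustrative placeholders, not
# the authors' implementation.

@dataclass
class EvalResult:
    presentation: float   # rubric score (scale assumed, e.g. 0-5)
    hallucinations: int   # count of generated claims unsupported by the source

def make_overview(paper_text: str) -> str:
    """Stage 1: condense the paper into an overview.md seed (stubbed)."""
    # Stand-in for a real summarizer: keep only the first line.
    return paper_text.splitlines()[0]

def agent_write_paper(overview: str) -> str:
    """Stage 2: stand-in for a coding agent expanding the overview."""
    return overview + "\n(agent-generated body)"

def score_presentation(generated: str) -> float:
    """Rubric-based presentation scoring (stubbed as a length heuristic)."""
    return min(5.0, len(generated) / 20)

def count_hallucinations(generated: str, original: str) -> int:
    """Agentic hallucination check, stubbed as a substring lookup:
    count generated lines with no support in the original paper."""
    return sum(1 for line in generated.splitlines()
               if line and line not in original)

def paper_recon(original: str) -> EvalResult:
    overview = make_overview(original)
    generated = agent_write_paper(overview)
    return EvalResult(score_presentation(generated),
                      count_hallucinations(generated, original))

result = paper_recon("Title: Example Paper\nMethod: ...\nResults: ...")
print(result.hallucinations)  # the stub flags the agent's fabricated body line
```

In this toy run the agent adds one line the original never contained, so the checker reports one hallucination; the real framework replaces each stub with an LLM-driven component but keeps the same original-grounded comparison structure.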
April 3, 2026