

Paper Reconstruction Evaluation: Evaluating Presentation and Hallucination in AI-written Papers

April 1, 2026
Authors: Atsuyuki Miyai, Mashiro Toyooka, Zaiying Zhao, Kenta Watanabe, Toshihiko Yamasaki, Kiyoharu Aizawa
cs.AI

Abstract

This paper introduces the first systematic evaluation framework for quantifying the quality and risks of papers written by modern coding agents. While AI-driven paper writing has become a growing concern, rigorous evaluation of the quality and potential risks of AI-written papers remains limited, and a unified understanding of their reliability is still lacking. We introduce Paper Reconstruction Evaluation (PaperRecon), an evaluation framework in which an overview (overview.md) is created from an existing paper, after which an agent generates a full paper from the overview and minimal additional resources, and the result is then compared against the original paper. PaperRecon disentangles the evaluation of AI-written papers into two orthogonal dimensions, Presentation and Hallucination, where Presentation is evaluated with a rubric and Hallucination is assessed via agentic evaluation grounded in the original paper source. For evaluation, we introduce PaperWrite-Bench, a benchmark of 51 papers published after 2025, drawn from top-tier venues across diverse domains. Our experiments reveal a clear trade-off: while both ClaudeCode and Codex improve with model advances, ClaudeCode achieves higher presentation quality at the cost of more than 10 hallucinations per paper on average, whereas Codex produces fewer hallucinations but lower presentation quality. This work takes a first step toward establishing evaluation frameworks for AI-driven paper writing and improving the understanding of its risks within the research community.
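The three-stage pipeline the abstract describes (overview extraction, agentic paper generation, comparison against the original) can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names, the scoring logic, and the `PaperReconResult` shape are hypothetical stand-ins, and the real system uses an LLM agent and a rubric rather than these toy heuristics.

```python
# Hypothetical sketch of the PaperRecon pipeline stages from the abstract.
# Every function body here is a toy placeholder, not the paper's method.
from dataclasses import dataclass


@dataclass
class PaperReconResult:
    presentation_score: float  # rubric-based in the real framework
    hallucination_count: int   # claims unsupported by the original paper


def make_overview(original_paper: str) -> str:
    """Stage 1: condense an existing paper into an overview.md-style summary.
    Placeholder heuristic: keep the first sentence of each paragraph."""
    paragraphs = original_paper.split("\n\n")
    return "\n".join(p.split(". ")[0] for p in paragraphs if p.strip())


def generate_paper(overview: str) -> str:
    """Stage 2: in PaperRecon, a coding agent expands the overview into a
    full paper. Here we simply echo the overview as a stand-in."""
    return overview


def evaluate(original: str, generated: str) -> PaperReconResult:
    """Stage 3: compare the generated paper against the original.
    Toy version: count generated lines absent from the original as
    'hallucinations'; the real framework uses agentic, source-grounded checks."""
    original_lines = set(original.splitlines())
    hallucinations = sum(
        1 for line in generated.splitlines() if line not in original_lines
    )
    return PaperReconResult(presentation_score=1.0,
                            hallucination_count=hallucinations)


original = "Claim A. Detail on A.\n\nClaim B. Detail on B."
generated = generate_paper(make_overview(original))
result = evaluate(original, generated)
```

Because the toy generator emits truncated sentences that never appear verbatim in the original, every generated line is flagged, which loosely mirrors how grounding against the source paper exposes unsupported content.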