
APRES: An Agentic Paper Revision and Evaluation System

March 3, 2026
Authors: Bingchen Zhao, Jenny Zhang, Chenxi Whitehouse, Minqi Jiang, Michael Shvartsman, Abhishek Charnalia, Despoina Magka, Tatiana Shavrina, Derek Dunfield, Oisin Mac Aodha, Yoram Bachrach
cs.AI

Abstract

Scientific discoveries must be communicated clearly to realize their full potential. Without effective communication, even the most groundbreaking findings risk being overlooked or misunderstood. The primary way scientists communicate their work and receive feedback from the community is through peer review. However, the current system often provides inconsistent feedback between reviewers, ultimately hindering the improvement of a manuscript and limiting its potential impact. In this paper, we introduce a novel method, APRES, powered by Large Language Models (LLMs) to update a scientific paper's text based on an evaluation rubric. Our automated method discovers a rubric that is highly predictive of future citation counts and integrates it with APRES in an automated system that revises papers to enhance their quality and impact. Crucially, this objective should be met without altering the core scientific content. We demonstrate the success of APRES, which improves future citation prediction by 19.6% in mean absolute error over the next best baseline, and show that our paper revision process yields papers that are preferred over the originals by human expert evaluators 79% of the time. Our findings provide strong empirical support for using LLMs as a tool to help authors stress-test their manuscripts before submission. Ultimately, our work seeks to augment, not replace, the essential role of human expert reviewers, for it should be humans who discern which discoveries truly matter, guiding science toward advancing knowledge and enriching lives.