

Story2Proposal: A Scaffold for Structured Scientific Paper Writing

March 28, 2026
Authors: Zhuoyang Qian, Wei Shi, Xu Lin, Li Ling, Meng Luo, Ziming Wang, Zhiwei Zhang, Tengyue Xu, Gaoge Liu, Zhentao Zhang, Shuo Zhang, Ziqi Wang, Zheng Feng, Yan Luo, Shu Xu, Yongjin Chen, Zhibo Feng, Zhuo Chen, Bruce Yuan, Biao Wu, Harry Wang, Kris Chen
cs.AI

Abstract

Generating scientific manuscripts requires maintaining alignment between narrative reasoning, experimental evidence, and visual artifacts across the document lifecycle. Existing language-model generation pipelines rely on unconstrained text synthesis with validation applied only after generation, often producing structural drift, missing figures or tables, and cross-section inconsistencies. We introduce Story2Proposal, a contract-governed multi-agent framework that converts a research story into a structured manuscript through coordinated agents operating under a persistent shared visual contract. The system organizes architect, writer, refiner, and renderer agents around a contract state that tracks section structure and registered visual elements, while evaluation agents supply feedback in a generate-evaluate-adapt loop that updates the contract during generation. Experiments on tasks derived from the Jericho research corpus show that Story2Proposal achieved an expert evaluation score of 6.145 versus 3.963 for DirectChat (+2.182) across GPT, Claude, Gemini, and Qwen backbones. Compared with the structured generation baseline Fars, Story2Proposal obtained an average score of 5.705 versus 5.197, indicating improved structural consistency and visual alignment.
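The abstract describes a contract state that tracks section structure and registered visuals, with a generate-evaluate-adapt loop that patches drafts until the contract is satisfied. The following is a minimal illustrative sketch of that control flow only; all class and function names (`ContractState`, `architect`, `writer`, and so on) are hypothetical stand-ins, since the paper's actual agents are LLM-driven and their interfaces are not specified in this abstract.

```python
from dataclasses import dataclass, field


@dataclass
class ContractState:
    """Shared contract: the section outline plus registered visual elements."""
    sections: list = field(default_factory=list)
    visuals: dict = field(default_factory=dict)  # visual name -> owning section

    def register_visual(self, name: str, section: str) -> None:
        self.visuals[name] = section


def architect(story: str, contract: ContractState) -> None:
    # Architect agent: derive a fixed outline and visual obligations
    # from the research story (hard-coded here for illustration).
    contract.sections = ["Introduction", "Method", "Experiments"]
    contract.register_visual("fig:overview", "Method")
    contract.register_visual("tab:results", "Experiments")


def writer(section: str, contract: ContractState) -> str:
    # Writer agent: draft a section, referencing every visual the
    # contract assigns to it.
    owned = [v for v, s in contract.visuals.items() if s == section]
    return f"{section}: ..." + "".join(f" (see {v})" for v in owned)


def evaluate(drafts: dict, contract: ContractState) -> list:
    # Evaluation agent: flag any registered visual that no draft mentions.
    text = " ".join(drafts.values())
    return [v for v in contract.visuals if v not in text]


def generate_evaluate_adapt(story: str):
    contract = ContractState()
    architect(story, contract)
    drafts = {s: writer(s, contract) for s in contract.sections}  # generate
    missing = evaluate(drafts, contract)                          # evaluate
    for v in missing:                                             # adapt
        drafts[contract.visuals[v]] += f" (see {v})"
    return drafts, evaluate(drafts, contract)


drafts, missing = generate_evaluate_adapt("a research story")
```

After the loop, `missing` is empty: every visual registered in the contract is referenced by the section that owns it, which is the invariant the paper's validation-during-generation design enforces, in contrast to post-hoc checking.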