**Narrative-Driven Paper-to-Slide Generation via ArcDeck**
April 13, 2026
Authors: Tarik Can Ozden, Sachidanand VS, Furkan Horoz, Ozgur Kara, Junho Kim, James Matthew Rehg
cs.AI
Abstract
We introduce ArcDeck, a multi-agent framework that formulates paper-to-slide generation as a structured narrative reconstruction task. Unlike existing methods that directly summarize raw text into slides, ArcDeck explicitly models the source paper's logical flow. It first parses the input to construct a discourse tree and establish a global commitment document, ensuring that the paper's high-level intent is preserved. These structural priors then guide an iterative multi-agent refinement process, in which specialized agents critique and revise the presentation outline before the final visual layouts are rendered. To evaluate our approach, we also introduce ArcBench, a newly curated benchmark of academic paper-slide pairs. Experimental results demonstrate that explicit discourse modeling, combined with role-specific agent coordination, significantly improves the narrative flow and logical coherence of the generated presentations.
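The pipeline the abstract describes (parse a discourse tree, flatten it into a global commitment document, then run iterative multi-agent critique over the outline) can be sketched at a very high level as follows. This is a minimal illustrative sketch only: all class and function names here are assumptions, not the authors' actual API, and the "critic" agents stand in for what would be LLM calls in the real system.

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseNode:
    """One node of the discourse tree parsed from the source paper
    (hypothetical structure, assumed for illustration)."""
    label: str   # e.g. "motivation", "method", "results"
    text: str
    children: list = field(default_factory=list)

def build_commitment_document(root: DiscourseNode) -> list:
    """Flatten the discourse tree into a global outline (the
    'commitment document') recording each section's high-level intent."""
    outline = [(root.label, root.text)]
    for child in root.children:
        outline.extend(build_commitment_document(child))
    return outline

def refine_outline(outline, critics, rounds=3):
    """Iterative multi-agent refinement: each specialized critic
    revises the outline before any visual layout is produced."""
    for _ in range(rounds):
        for critic in critics:
            outline = critic(outline)
    return outline

# Toy usage with a trivial "agent" (a real agent would query an LLM).
tree = DiscourseNode("paper", "ArcDeck", [
    DiscourseNode("motivation", "slides should follow the paper's logic"),
    DiscourseNode("method", "discourse tree + multi-agent refinement"),
])
outline = build_commitment_document(tree)
cleaner = lambda o: [(lbl, txt.strip()) for lbl, txt in o]  # no-op critic
final = refine_outline(outline, [cleaner], rounds=1)
print(len(final))  # → 3 outline entries
```

The key design point the abstract emphasizes is the ordering: the global outline is fixed and refined *before* slide rendering, so layout decisions cannot override the paper's narrative structure.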