Multimodal DeepResearcher: Generating Text-Chart Interleaved Reports From Scratch with Agentic Framework
June 3, 2025
Authors: Zhaorui Yang, Bo Pan, Han Wang, Yiyao Wang, Xingyu Liu, Minfeng Zhu, Bo Zhang, Wei Chen
cs.AI
Abstract
Visualizations play a crucial part in effective communication of concepts and
information. Recent advances in reasoning and retrieval augmented generation
have enabled Large Language Models (LLMs) to perform deep research and generate
comprehensive reports. Despite this progress, existing deep research frameworks
primarily focus on generating text-only content, leaving the automated
generation of interleaved texts and visualizations underexplored. This novel
task poses key challenges in designing informative visualizations and
effectively integrating them with text reports. To address these challenges, we
propose Formal Description of Visualization (FDV), a structured textual
representation of charts that enables LLMs to learn from and generate diverse,
high-quality visualizations. Building on this representation, we introduce
Multimodal DeepResearcher, an agentic framework that decomposes the task into
four stages: (1) researching, (2) exemplar report textualization, (3) planning,
and (4) multimodal report generation. For the evaluation of generated
multimodal reports, we develop MultimodalReportBench, which contains 100
diverse topics serving as inputs, along with 5 dedicated metrics. Extensive
experiments across models and evaluation methods demonstrate the effectiveness
of Multimodal DeepResearcher. Notably, utilizing the same Claude 3.7 Sonnet
model, Multimodal DeepResearcher achieves an 82\% overall win rate over the
baseline method.
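The four-stage decomposition described in the abstract can be sketched as a simple pipeline. The stage names follow the paper; everything else here — the `Report` structure, the string-based FDV placeholders, and all function bodies — is an illustrative assumption, not the authors' actual implementation.

```python
# Hypothetical sketch of the four-stage agentic pipeline from the abstract:
# (1) researching, (2) exemplar report textualization, (3) planning,
# (4) multimodal report generation. All data structures are placeholders.
from dataclasses import dataclass, field


@dataclass
class Report:
    # A multimodal report: ("text", content) and ("chart", fdv) segments
    # interleaved in reading order.
    segments: list = field(default_factory=list)


def research(topic: str) -> list:
    """Stage 1: gather findings on the topic (stands in for retrieval)."""
    return [f"Finding about {topic}"]


def textualize_exemplars(exemplars: list) -> list:
    """Stage 2: convert exemplar reports' charts into FDV-style text,
    so an LLM can learn chart design from in-context examples."""
    return [f"FDV: {e.get('chart_type', 'bar')} chart of {e.get('data', '...')}"
            for e in exemplars]


def plan(findings: list, fdv_examples: list) -> list:
    """Stage 3: outline which segments are prose and which get a chart."""
    outline = [("text", findings[0])]
    if fdv_examples:
        outline.append(("chart", fdv_examples[0]))
    return outline


def generate(outline: list) -> Report:
    """Stage 4: render the plan into an interleaved multimodal report."""
    report = Report()
    for kind, content in outline:
        report.segments.append((kind, content))
    return report


def multimodal_deep_researcher(topic: str, exemplars: list) -> Report:
    findings = research(topic)
    fdvs = textualize_exemplars(exemplars)
    return generate(plan(findings, fdvs))
```

The point of the sketch is the control flow: exemplar charts are textualized into FDV before planning, so the planner and generator operate purely on text, which is what lets an ordinary LLM produce the chart specifications.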