

OmniScript: Towards Audio-Visual Script Generation for Long-Form Cinematic Video

April 13, 2026
Authors: Junfu Pu, Yuxin Chen, Teng Wang, Ying Shan
cs.AI

Abstract

Current multimodal large language models (MLLMs) have demonstrated remarkable capabilities in short-form video understanding, yet translating long-form cinematic videos into detailed, temporally grounded scripts remains a significant challenge. This paper introduces the novel video-to-script (V2S) task, aiming to generate hierarchical, scene-by-scene scripts encompassing character actions, dialogues, expressions, and audio cues. To facilitate this, we construct a first-of-its-kind human-annotated benchmark and propose a temporally-aware hierarchical evaluation framework. Furthermore, we present OmniScript, an 8B-parameter omni-modal (audio-visual) language model tailored for long-form narrative comprehension. OmniScript is trained via a progressive pipeline that leverages chain-of-thought supervised fine-tuning for plot and character reasoning, followed by reinforcement learning using temporally segmented rewards. Extensive experiments demonstrate that despite its parameter efficiency, OmniScript significantly outperforms larger open-source models and achieves performance comparable to state-of-the-art proprietary models, including Gemini 3-Pro, in both temporal localization and multi-field semantic accuracy.
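The abstract mentions reinforcement learning with "temporally segmented rewards" for scene-level script generation. The paper's actual reward formulation is not given here, so the following is only an illustrative sketch under assumptions: each predicted scene span is greedily matched to a reference span and scored by temporal IoU, so the reward reflects per-scene localization rather than a single whole-video score. All function and variable names are hypothetical.

```python
# Illustrative sketch (assumption, not the paper's method): a per-scene
# temporally segmented reward based on temporal IoU of (start, end) spans.

def temporal_iou(pred, ref):
    """IoU of two (start, end) intervals, e.g. in seconds."""
    inter = max(0.0, min(pred[1], ref[1]) - max(pred[0], ref[0]))
    union = (pred[1] - pred[0]) + (ref[1] - ref[0]) - inter
    return inter / union if union > 0 else 0.0

def segmented_reward(pred_scenes, ref_scenes):
    """Mean temporal IoU over greedily matched scene segments.

    Each reference scene is matched to its best unused predicted scene;
    unmatched references contribute 0, penalizing missed scenes.
    """
    if not ref_scenes:
        return 0.0
    total, used = 0.0, set()
    for ref in ref_scenes:
        best, best_i = 0.0, None
        for i, pred in enumerate(pred_scenes):
            if i in used:
                continue
            iou = temporal_iou(pred, ref)
            if iou > best:
                best, best_i = iou, i
        if best_i is not None:
            used.add(best_i)
        total += best
    return total / len(ref_scenes)
```

A reward of this shape gives gradient signal per scene boundary, which is one plausible reading of "temporally segmented"; the paper may additionally weight semantic field accuracy (actions, dialogue, audio cues) per segment.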