VChain: Chain-of-Visual-Thought for Reasoning in Video Generation
October 6, 2025
Authors: Ziqi Huang, Ning Yu, Gordon Chen, Haonan Qiu, Paul Debevec, Ziwei Liu
cs.AI
Abstract
Recent video generation models can produce smooth and visually appealing clips, but they often struggle to synthesize complex dynamics with a coherent chain of consequences. Accurately modeling visual outcomes and state transitions over time remains a core challenge. In contrast, large language and multimodal models (e.g., GPT-4o) exhibit strong visual state reasoning and future prediction capabilities. To bridge these strengths, we introduce VChain, a novel inference-time chain-of-visual-thought framework that injects visual reasoning signals from multimodal models into video generation. Specifically, VChain contains a dedicated pipeline that leverages large multimodal models to generate a sparse set of critical keyframes as snapshots, which are then used to guide the sparse inference-time tuning of a pre-trained video generator only at these key moments. Our approach is tuning-efficient, introduces minimal overhead, and avoids dense supervision. Extensive experiments on complex, multi-step scenarios show that VChain significantly enhances the quality of generated videos.
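The abstract describes a two-stage inference-time pipeline: a multimodal model first reasons out a sparse set of keyframe snapshots, and those snapshots then supervise a brief tuning of the pre-trained video generator at only those key moments. The sketch below is a minimal illustration of that flow under our own naming assumptions; Keyframe, reason_keyframes, sparse_tune, and vchain_generate are hypothetical placeholders, not an API given in the paper.

```python
# Minimal, hypothetical sketch of the pipeline summarized in the abstract.
# None of these names come from the paper; they only illustrate the flow:
# multimodal reasoning -> sparse keyframe snapshots -> sparse inference-time
# tuning of a pre-trained video generator -> final video.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class Keyframe:
    """A snapshot of the scene at one key moment (assumed representation)."""
    time: float   # position of the key moment within the clip
    image: Any    # predicted visual state at that moment


def reason_keyframes(prompt: str, num_keyframes: int = 4) -> List[Keyframe]:
    """Ask a large multimodal model (e.g., GPT-4o) to reason about the chain
    of consequences and emit a sparse set of keyframe snapshots."""
    raise NotImplementedError("placeholder: query the multimodal model")


def sparse_tune(video_model: Any, keyframes: List[Keyframe]) -> Any:
    """Tune the pre-trained video generator only at the key moments,
    avoiding dense per-frame supervision (hypothetical update call)."""
    for kf in keyframes:
        video_model.update_at(kf.time, target=kf.image)
    return video_model


def vchain_generate(video_model: Any, prompt: str) -> Any:
    keyframes = reason_keyframes(prompt)         # chain-of-visual-thought
    tuned = sparse_tune(video_model, keyframes)  # sparse inference-time tuning
    return tuned.sample(prompt)                  # hypothetical sampling call
```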