Motion Attribution for Video Generation

January 13, 2026
Authors: Xindi Wu, Despoina Paschalidou, Jun Gao, Antonio Torralba, Laura Leal-Taixé, Olga Russakovsky, Sanja Fidler, Jonathan Lorraine
cs.AI

Abstract

Despite the rapid progress of video generation models, the role of data in influencing motion is poorly understood. We present Motive (MOTIon attribution for Video gEneration), a motion-centric, gradient-based data attribution framework that scales to modern, large, high-quality video datasets and models. We use this to study which fine-tuning clips improve or degrade temporal dynamics. Motive isolates temporal dynamics from static appearance via motion-weighted loss masks, yielding efficient and scalable motion-specific influence computation. On text-to-video models, Motive identifies clips that strongly affect motion and guides data curation that improves temporal consistency and physical plausibility. With Motive-selected high-influence data, our method improves both motion smoothness and dynamic degree on VBench, achieving a 74.1% human preference win rate compared with the pretrained base model. To our knowledge, this is the first framework to attribute motion rather than visual appearance in video generative models and to use it to curate fine-tuning data.
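The abstract describes two ingredients: a motion-weighted loss mask that down-weights static appearance, and gradient-based influence scores over fine-tuning clips. The sketch below illustrates that general idea in PyTorch; the frame-difference mask, the first-order gradient-dot-product influence score, and all names (motion_weight_mask, motion_weighted_loss, influence_score) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): a motion-weighted loss mask plus a
# simple first-order influence score, assuming a PyTorch video model whose per-pixel
# training loss is available as a tensor of shape (B, T, C, H, W).
import torch


def motion_weight_mask(video: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Weight each location by temporal change, down-weighting static appearance.

    video: (B, T, C, H, W) pixel tensor. Returns weights of the same shape,
    normalized so they average to 1 over each clip.
    """
    # Frame-to-frame absolute difference as a crude motion proxy (assumption).
    diff = (video[:, 1:] - video[:, :-1]).abs()
    diff = torch.cat([diff[:, :1], diff], dim=1)  # repeat first difference for frame 0
    return diff / (diff.mean(dim=(1, 2, 3, 4), keepdim=True) + eps)


def motion_weighted_loss(per_pixel_loss: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
    """Apply the motion mask to a per-pixel loss (e.g., a denoising error)."""
    return (per_pixel_loss * motion_weight_mask(video)).mean()


def influence_score(model: torch.nn.Module,
                    train_loss: torch.Tensor,
                    val_loss: torch.Tensor) -> float:
    """First-order influence proxy: dot product of training and validation gradients."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_train = torch.autograd.grad(train_loss, params, retain_graph=True)
    g_val = torch.autograd.grad(val_loss, params)
    return sum((gt * gv).sum() for gt, gv in zip(g_train, g_val)).item()
```

Under these assumptions, clips whose motion-weighted training gradients align with the gradient of a motion-focused validation loss would receive high influence scores and be kept for fine-tuning, while negatively aligned clips would be filtered out.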