FlashMotion: Few-Step Controllable Video Generation with Trajectory Guidance
March 12, 2026
Authors: Quanhao Li, Zhen Xing, Rui Wang, Haidong Cao, Qi Dai, Daoguo Dong, Zuxuan Wu
cs.AI
Abstract
Trajectory-controllable video generation has recently achieved remarkable progress. Previous methods mainly use adapter-based architectures for precise motion control along predefined trajectories. However, all of these methods rely on a multi-step denoising process, leading to substantial time redundancy and computational overhead. While existing video distillation methods successfully distill multi-step generators into few-step ones, directly applying these approaches to trajectory-controllable video generation results in noticeable degradation in both video quality and trajectory accuracy. To bridge this gap, we introduce FlashMotion, a novel training framework designed for few-step trajectory-controllable video generation. We first train a trajectory adapter on a multi-step video generator for precise trajectory control. Then, we distill the generator into a few-step version to accelerate video generation. Finally, we finetune the adapter using a hybrid strategy that combines diffusion and adversarial objectives, aligning it with the few-step generator to produce high-quality, trajectory-accurate videos. For evaluation, we introduce FlashBench, a benchmark for long-sequence trajectory-controllable video generation that measures both video quality and trajectory accuracy across varying numbers of foreground objects. Experiments on two adapter architectures show that FlashMotion surpasses existing video distillation methods and previous multi-step models in both visual quality and trajectory consistency.
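The hybrid finetuning objective mentioned above, which combines a diffusion (denoising) term with an adversarial term, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `hybrid_adapter_loss`, the non-saturating GAN form of the adversarial term, and the `adv_weight` balancing coefficient are all assumptions.

```python
import torch
import torch.nn.functional as F

def hybrid_adapter_loss(pred_noise, target_noise, disc_logits_fake, adv_weight=0.1):
    """Sketch of a hybrid diffusion + adversarial objective (illustrative only).

    pred_noise / target_noise: the adapter-conditioned few-step generator's
        noise prediction vs. the regression target (diffusion objective).
    disc_logits_fake: discriminator logits on generated frames; the generator
        side of a non-saturating GAN loss pushes them toward "real".
    adv_weight: hypothetical coefficient balancing the two terms.
    """
    # Standard denoising regression term.
    diffusion_loss = F.mse_loss(pred_noise, target_noise)
    # Non-saturating generator loss: -log(sigmoid(D(fake))) = softplus(-D(fake)).
    adversarial_loss = F.softplus(-disc_logits_fake).mean()
    return diffusion_loss + adv_weight * adversarial_loss
```

In such setups the adversarial term typically sharpens few-step outputs, while the diffusion term keeps the adapter's trajectory conditioning aligned with the distilled generator.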