

TrajectoryMover: Generative Movement of Object Trajectories in Videos

March 31, 2026
Authors: Kiran Chhatre, Hyeonho Jeong, Yulia Gryaditskaya, Christopher E. Peters, Chun-Hao Paul Huang, Paul Guerrero
cs.AI

Abstract

Generative video editing has enabled several intuitive editing operations for short video clips that would previously have been difficult to achieve, especially for non-expert editors. Existing methods focus on prescribing an object's 3D or 2D motion trajectory in a video, or on altering the appearance of an object or a scene, while preserving both the video's plausibility and identity. Yet a method to move an object's 3D motion trajectory in a video, i.e., to displace an object while preserving its relative 3D motion, is still missing. The main challenge lies in obtaining paired video data for this scenario. Previous methods typically rely on clever data generation schemes to construct plausible paired data from unpaired videos, but this approach fails when one video in a pair cannot easily be constructed from the other. Instead, we introduce TrajectoryAtlas, a new data generation pipeline for large-scale synthetic paired video data, and TrajectoryMover, a video generator fine-tuned with this data. We show that this successfully enables generative movement of object trajectories. Project page: https://chhatrekiran.github.io/trajectorymover
PDF · April 2, 2026