

OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer

January 20, 2026
Authors: Pengze Zhang, Yanze Wu, Mengtian Li, Xu Bai, Songtao Zhao, Fulong Ye, Chong Mou, Xinghui Li, Zhuowei Chen, Qian He, Mingyuan Gao
cs.AI

Abstract

Videos convey richer information than images or text, capturing both spatial and temporal dynamics. However, most existing video customization methods rely on reference images or task-specific temporal priors, failing to fully exploit the rich spatio-temporal information inherent in videos and thereby limiting the flexibility and generalization of video generation. To address these limitations, we propose OmniTransfer, a unified framework for spatio-temporal video transfer. It leverages multi-view information across frames to enhance appearance consistency and exploits temporal cues to enable fine-grained temporal control. To unify the various video transfer tasks, OmniTransfer incorporates three key designs: Task-aware Positional Bias, which adaptively leverages reference video information to improve temporal alignment or appearance consistency; Reference-decoupled Causal Learning, which separates the reference and target branches to enable precise reference transfer while improving efficiency; and Task-adaptive Multimodal Alignment, which uses multimodal semantic guidance to dynamically distinguish and handle different tasks. Extensive experiments show that OmniTransfer outperforms existing methods in both appearance transfer (identity and style) and temporal transfer (camera movement and video effects), while matching pose-guided methods in motion transfer without using pose guidance, establishing a new paradigm for flexible, high-fidelity video generation.
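The abstract describes Reference-decoupled Causal Learning only at a high level, and the paper's implementation is not reproduced here. The sketch below is a hypothetical illustration of one way such a design could be expressed as a block-wise attention mask: reference tokens stay independent of the generation branch, while target tokens attend to all reference tokens and causally to earlier target tokens. The function name, token layout, and PyTorch usage are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only (assumed design, not the paper's released implementation):
# a block-wise attention mask separating a reference branch from a causal target branch.
import torch


def build_reference_decoupled_mask(num_ref_tokens: int, num_tgt_tokens: int) -> torch.Tensor:
    """Boolean attention mask of shape (L, L); True means attention is allowed.

    - Reference tokens attend only to other reference tokens, keeping the
      reference branch independent of the generation branch.
    - Target tokens attend to every reference token (one-way transfer of
      reference information) and causally to earlier target tokens.
    """
    total = num_ref_tokens + num_tgt_tokens
    mask = torch.zeros(total, total, dtype=torch.bool)

    # Reference branch: full self-attention within the reference tokens only.
    mask[:num_ref_tokens, :num_ref_tokens] = True

    # Target branch: attend to all reference tokens ...
    mask[num_ref_tokens:, :num_ref_tokens] = True
    # ... and causally within the target tokens (each token sees itself and its past).
    causal = torch.tril(torch.ones(num_tgt_tokens, num_tgt_tokens)).bool()
    mask[num_ref_tokens:, num_ref_tokens:] = causal

    return mask


if __name__ == "__main__":
    m = build_reference_decoupled_mask(num_ref_tokens=4, num_tgt_tokens=6)
    print(m.int())  # visualize the block structure of the mask
```

Under this assumed layout, the mask would be passed to the transformer's attention so that gradients and features flow from the reference branch into the target branch but never the other way, which is one plausible reading of how decoupling the two branches could improve both transfer precision and efficiency.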