

Long Context Tuning for Video Generation

March 13, 2025
Authors: Yuwei Guo, Ceyuan Yang, Ziyan Yang, Zhibei Ma, Zhijie Lin, Zhenheng Yang, Dahua Lin, Lu Jiang
cs.AI

Abstract

Recent advances in video generation can produce realistic, minute-long single-shot videos with scalable diffusion transformers. However, real-world narrative videos require multi-shot scenes with visual and dynamic consistency across shots. In this work, we introduce Long Context Tuning (LCT), a training paradigm that expands the context window of pre-trained single-shot video diffusion models to learn scene-level consistency directly from data. Our method expands full attention mechanisms from individual shots to encompass all shots within a scene, incorporating interleaved 3D position embedding and an asynchronous noise strategy, enabling both joint and auto-regressive shot generation without additional parameters. Models with bidirectional attention after LCT can further be fine-tuned with context-causal attention, facilitating auto-regressive generation with an efficient KV-cache. Experiments demonstrate that single-shot models after LCT can produce coherent multi-shot scenes and exhibit emerging capabilities, including compositional generation and interactive shot extension, paving the way for more practical visual content creation. See https://guoyww.github.io/projects/long-context-video/ for more details.
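To make the scene-level conditioning concrete, below is a minimal sketch (not the authors' code) of how inputs to such a model might be assembled: per-shot latents are flattened into one token sequence so full attention spans every shot, 3D (frame, height, width) position indices are interleaved so shots share a continuous coordinate frame, and each shot receives its own independently sampled diffusion timestep (the asynchronous noise strategy). All tensor shapes, function names, and the toy noise schedule are illustrative assumptions.

```python
import torch

def build_scene_inputs(shot_latents, num_timesteps=1000):
    """shot_latents: list of clean per-shot latents, each of shape (T, H, W, C)."""
    tokens, pos_ids, timesteps, frame_offset = [], [], [], 0
    for x0 in shot_latents:
        T, H, W, C = x0.shape
        # Asynchronous noise: sample an independent timestep per shot.
        t = torch.randint(0, num_timesteps, (1,))
        alpha = 1.0 - t.float() / num_timesteps            # toy linear schedule (assumption)
        x_t = alpha.sqrt() * x0 + (1 - alpha).sqrt() * torch.randn_like(x0)
        # Interleaved 3D position ids: the frame index continues across shots,
        # while spatial coordinates restart per frame.
        f = torch.arange(T) + frame_offset
        grid = torch.stack(
            torch.meshgrid(f, torch.arange(H), torch.arange(W), indexing="ij"),
            dim=-1,
        )                                                   # (T, H, W, 3)
        tokens.append(x_t.reshape(-1, C))
        pos_ids.append(grid.reshape(-1, 3))
        timesteps.append(t.expand(T * H * W))
        frame_offset += T
    # One long sequence => full attention can span every shot in the scene.
    return torch.cat(tokens), torch.cat(pos_ids), torch.cat(timesteps)

# Example: a 3-shot scene with tiny latents.
scene = [torch.randn(4, 8, 8, 16) for _ in range(3)]
tok, pos, t = build_scene_inputs(scene)
print(tok.shape, pos.shape, t.shape)  # torch.Size([768, 16]) torch.Size([768, 3]) torch.Size([768])
```

In this reading, joint generation corresponds to denoising all shots together with bidirectional attention over the concatenated sequence, while auto-regressive extension conditions new shots on already-clean ones; the context-causal fine-tuning mentioned in the abstract would restrict attention so earlier shots never attend to later ones, allowing their keys and values to be cached.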

