

Video-Infinity: Distributed Long Video Generation

June 24, 2024
Authors: Zhenxiong Tan, Xingyi Yang, Songhua Liu, Xinchao Wang
cs.AI

Abstract

Diffusion models have recently achieved remarkable results for video generation. Despite the encouraging performance, the generated videos are typically constrained to a small number of frames, resulting in clips lasting merely a few seconds. The primary challenges in producing longer videos include the substantial memory requirements and the extended processing time required on a single GPU. A straightforward solution would be to split the workload across multiple GPUs, which, however, leads to two issues: (1) ensuring all GPUs communicate effectively to share timing and context information, and (2) modifying existing video diffusion models, which are usually trained on short sequences, to create longer videos without additional training. To tackle these issues, in this paper we introduce Video-Infinity, a distributed inference pipeline that enables parallel processing across multiple GPUs for long-form video generation. Specifically, we propose two coherent mechanisms: Clip parallelism and Dual-scope attention. Clip parallelism optimizes the gathering and sharing of context information across GPUs, which minimizes communication overhead, while Dual-scope attention modulates the temporal self-attention to balance local and global contexts efficiently across the devices. Together, the two mechanisms join forces to distribute the workload and enable the fast generation of long videos. Under an 8 x Nvidia 6000 Ada GPU (48G) setup, our method generates videos up to 2,300 frames in approximately 5 minutes, enabling long video generation at a speed 100 times faster than the prior methods.
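
To make the Dual-scope attention idea more concrete, below is a minimal, self-contained sketch (not the authors' implementation) of a temporal self-attention step whose keys and values mix a local window of neighbouring frames with a few globally sampled frames. All names and parameters (`dual_scope_temporal_attention`, `local_window`, `num_global`) are illustrative assumptions; query/key/value projections are omitted, and the cross-GPU exchange handled by Clip parallelism is not shown, so each device is assumed to attend only over the frames it holds.

```python
import torch
import torch.nn.functional as F

def dual_scope_temporal_attention(x, local_window=8, num_global=8):
    """x: (frames, tokens, dim) temporal features for the clip held on this device."""
    f, n, d = x.shape
    out = torch.empty_like(x)
    # Global scope: a few frames sampled uniformly over the whole clip.
    global_idx = torch.linspace(0, f - 1, steps=min(num_global, f)).long()
    for t in range(f):
        # Local scope: a window of neighbouring frames around frame t.
        lo, hi = max(0, t - local_window), min(f, t + local_window + 1)
        ctx_idx = torch.cat([torch.arange(lo, hi), global_idx]).unique()
        kv = x[ctx_idx]                # (ctx, tokens, dim)
        k = kv.permute(1, 0, 2)        # (tokens, ctx, dim)
        q = x[t].unsqueeze(1)          # (tokens, 1, dim)
        # Each spatial token attends over its dual-scope temporal context
        # (keys and values are the same context frames; projections omitted).
        out[t] = F.scaled_dot_product_attention(q, k, k).squeeze(1)
    return out

# Example: a 64-frame clip with 16 tokens per frame and 32-dim features.
frames = torch.randn(64, 16, 32)
y = dual_scope_temporal_attention(frames)
print(y.shape)  # torch.Size([64, 16, 32])
```

In the full pipeline described by the abstract, the local window would also draw boundary frames from clips on neighbouring devices, and the global frames would be gathered across all GPUs; that cross-device context sharing is precisely what Clip parallelism is designed to make cheap.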
