

Lumos-1: On Autoregressive Video Generation from a Unified Model Perspective

July 11, 2025
Authors: Hangjie Yuan, Weihua Chen, Jun Cen, Hu Yu, Jingyun Liang, Shuning Chang, Zhihui Lin, Tao Feng, Pengwei Liu, Jiazheng Xing, Hao Luo, Jiasheng Tang, Fan Wang, Yi Yang
cs.AI

Abstract

Autoregressive large language models (LLMs) have unified a vast range of language tasks, inspiring preliminary efforts in autoregressive video generation. Existing autoregressive video generators either diverge from standard LLM architectures, depend on bulky external text encoders, or incur prohibitive latency due to next-token decoding. In this paper, we introduce Lumos-1, an autoregressive video generator that retains the LLM architecture with minimal architectural modifications. To inject spatiotemporal correlations into LLMs, we verify the efficacy of incorporating 3D RoPE and diagnose its imbalanced frequency spectrum ranges. Therefore, we propose MM-RoPE, a RoPE scheme that preserves the original textual RoPE while providing comprehensive frequency spectra and scaled 3D positions for modeling multimodal spatiotemporal data. Moreover, Lumos-1 adopts a token dependency strategy that obeys intra-frame bidirectionality and inter-frame temporal causality. Based on this dependency strategy, we identify the issue of frame-wise loss imbalance caused by spatial information redundancy and solve it by proposing Autoregressive Discrete Diffusion Forcing (AR-DF). AR-DF introduces temporal tube masking during training, paired with a compatible inference-time masking policy, to avoid quality degradation. Using memory-efficient training techniques, we pre-train Lumos-1 on only 48 GPUs, achieving performance comparable to EMU3 on GenEval, COSMOS-Video2World on VBench-I2V, and OpenSoraPlan on VBench-T2V. Code and models are available at https://github.com/alibaba-damo-academy/Lumos.
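To make the 3D RoPE idea concrete, the sketch below assigns rotary angles over a (time, height, width) token grid by splitting each head's channel pairs among the three axes. This is plain 3D RoPE, not the paper's MM-RoPE; the function name and the even three-way split are illustrative assumptions.

```python
import torch

def rope_angles_3d(t: int, h: int, w: int, head_dim: int,
                   base: float = 10000.0) -> torch.Tensor:
    """Assign one rotary angle per channel pair for every token in a
    (t, h, w) grid, splitting the pairs among the three axes."""
    pairs = head_dim // 2
    p_t = p_h = pairs // 3
    p_w = pairs - p_t - p_h  # remainder goes to the width axis

    def axis_angles(pos: torch.Tensor, p: int) -> torch.Tensor:
        # Geometric frequency ladder, as in standard 1D RoPE.
        freqs = base ** (-torch.arange(p, dtype=torch.float32) / p)
        return pos.to(torch.float32).unsqueeze(-1) * freqs  # (N, p)

    gt, gh, gw = torch.meshgrid(torch.arange(t), torch.arange(h),
                                torch.arange(w), indexing="ij")
    angles = torch.cat([
        axis_angles(gt.reshape(-1), p_t),   # temporal pairs
        axis_angles(gh.reshape(-1), p_h),   # vertical pairs
        axis_angles(gw.reshape(-1), p_w),   # horizontal pairs
    ], dim=-1)
    return angles  # (t*h*w, head_dim // 2)
```

A naive split like this gives each axis a different, disjoint slice of the frequency spectrum, which is the kind of imbalance the abstract diagnoses; MM-RoPE is described as instead providing comprehensive frequency spectra and scaled 3D positions per axis.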
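The token dependency the abstract describes (bidirectional within a frame, causal across frames) corresponds to a block-structured attention mask. Here is a minimal sketch, assuming a flat token layout of `num_frames * tokens_per_frame`; the helper name is hypothetical, not Lumos-1's actual implementation.

```python
import torch

def frame_causal_mask(num_frames: int, tokens_per_frame: int) -> torch.Tensor:
    """Build a boolean attention mask (True = may attend) where tokens attend
    bidirectionally within their own frame and causally to earlier frames."""
    n = num_frames * tokens_per_frame
    # Frame index of each flattened token position.
    frame_id = torch.arange(n) // tokens_per_frame
    # Token i may attend to token j iff j's frame is not later than i's frame.
    return frame_id.unsqueeze(1) >= frame_id.unsqueeze(0)

# Example: 3 frames of 4 tokens each yields a 12x12 block-lower-triangular
# mask with dense (bidirectional) 4x4 blocks on the diagonal.
print(frame_causal_mask(3, 4).int())
```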
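Temporal tube masking, as the term is used here, masks the same spatial token positions in every frame. The sketch below shows only that pattern; the actual AR-DF masking ratio, treatment of individual frames, and the inference-time policy follow the paper, and the function name is hypothetical.

```python
import torch

def temporal_tube_mask(num_frames: int, height: int, width: int,
                       mask_ratio: float = 0.5,
                       generator: torch.Generator | None = None) -> torch.Tensor:
    """Sample one spatial mask and repeat it over time, so the same spatial
    positions are masked in every frame (a temporal 'tube')."""
    spatial = torch.rand(height, width, generator=generator) < mask_ratio
    # Broadcast the identical spatial pattern across the temporal axis.
    return spatial.unsqueeze(0).expand(num_frames, height, width)

# Example: the same ~50% spatial pattern is masked in all 8 frames.
tube = temporal_tube_mask(num_frames=8, height=4, width=4)
assert bool((tube[0] == tube[-1]).all())
```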