

Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation

July 2, 2025
Authors: Zhuoyang Zhang, Luke J. Huang, Chengyue Wu, Shang Yang, Kelly Peng, Yao Lu, Song Han
cs.AI

Abstract

We present Locality-aware Parallel Decoding (LPD) to accelerate autoregressive image generation. Traditional autoregressive image generation relies on next-patch prediction, a memory-bound process that leads to high latency. Existing works have tried to parallelize next-patch prediction by shifting to multi-patch prediction to accelerate the process, but only achieved limited parallelization. To achieve high parallelization while maintaining generation quality, we introduce two key techniques: (1) Flexible Parallelized Autoregressive Modeling, a novel architecture that enables arbitrary generation ordering and degrees of parallelization. It uses learnable position query tokens to guide generation at target positions while ensuring mutual visibility among concurrently generated tokens for consistent parallel decoding. (2) Locality-aware Generation Ordering, a novel schedule that forms groups to minimize intra-group dependencies and maximize contextual support, enhancing generation quality. With these designs, we reduce the generation steps from 256 to 20 (256×256 resolution) and from 1024 to 48 (512×512 resolution) without compromising quality on ImageNet class-conditional generation, while achieving at least 3.4× lower latency than previous parallelized autoregressive models.
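The "mutual visibility among concurrently generated tokens" in technique (1) can be pictured as an attention mask: already-generated prefix tokens attend causally, while the position-query tokens decoded in the current step see the full prefix and each other. The sketch below is an illustrative reconstruction, not code from the paper; the function name and layout are assumptions.

```python
import numpy as np

def parallel_decode_mask(n_prefix: int, n_group: int) -> np.ndarray:
    """Boolean attention mask (True = may attend) for one parallel
    decoding step: n_prefix already-generated tokens with causal
    attention, plus n_group position-query tokens that see the whole
    prefix and each other (mutual visibility)."""
    n = n_prefix + n_group
    mask = np.zeros((n, n), dtype=bool)
    # Causal attention within the generated prefix.
    mask[:n_prefix, :n_prefix] = np.tril(np.ones((n_prefix, n_prefix), dtype=bool))
    # Query tokens attend to the entire prefix ...
    mask[n_prefix:, :n_prefix] = True
    # ... and to every other token generated in the same step.
    mask[n_prefix:, n_prefix:] = True
    return mask
```

A mask like this keeps parallel decoding consistent: tokens emitted in the same step condition on one another's positions rather than being sampled independently.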
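Technique (2) schedules which patch positions are decoded together: each group should contain positions that are far from one another (weak intra-group dependence) yet close to already-generated context. A minimal greedy sketch of that idea follows; it is a hypothetical illustration under Manhattan distance, not the paper's actual ordering algorithm.

```python
def locality_aware_groups(grid: int, group_size: int) -> list:
    """Greedy sketch: partition a grid x grid patch lattice into
    decoding groups whose members are mutually distant but near
    previously generated positions."""
    remaining = {(r, c) for r in range(grid) for c in range(grid)}
    generated, groups = [], []
    while remaining:
        group = []
        while remaining and len(group) < group_size:
            def score(p):
                # Reward distance from tokens in the current group ...
                d_group = min((abs(p[0] - q[0]) + abs(p[1] - q[1])
                               for q in group), default=0)
                # ... and penalize distance from generated context.
                d_ctx = min((abs(p[0] - q[0]) + abs(p[1] - q[1])
                             for q in generated), default=0)
                return d_group - d_ctx
            best = max(remaining, key=score)
            group.append(best)
            remaining.remove(best)
        generated.extend(group)
        groups.append(group)
    return groups
```

For a 16×16 grid (256 patches) a schedule in this spirit, with growing group sizes, is what lets the step count fall from 256 toward the order of 20 without starving any position of nearby context.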