
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction

April 3, 2024
Authors: Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, Liwei Wang
cs.AI

Abstract

We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction". This simple, intuitive methodology allows autoregressive (AR) transformers to learn visual distributions quickly and generalize well: VAR, for the first time, makes AR models surpass diffusion transformers in image generation. On the ImageNet 256x256 benchmark, VAR significantly improves the AR baseline, lowering the Fréchet inception distance (FID) from 18.65 to 1.80 and raising the inception score (IS) from 80.4 to 356.4, with around 20x faster inference. It is also empirically verified that VAR outperforms the Diffusion Transformer (DiT) in multiple dimensions, including image quality, inference speed, data efficiency, and scalability. Scaling up VAR models exhibits clear power-law scaling laws similar to those observed in LLMs, with linear correlation coefficients near -0.998 as solid evidence. VAR further showcases zero-shot generalization ability in downstream tasks, including image in-painting, out-painting, and editing. These results suggest that VAR has initially emulated two important properties of LLMs: scaling laws and zero-shot task generalization. We have released all models and code to promote the exploration of AR/VAR models for visual generation and unified learning.
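For a concrete picture of the coarse-to-fine loop the abstract describes, below is a minimal, self-contained sketch of next-scale prediction. The scale schedule, the toy predictor, and all names are illustrative assumptions, not the released VAR code; the point it shows is that each step emits an entire token map at the next resolution in one parallel pass, conditioned on everything generated at coarser scales, rather than one token at a time in raster order.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of coarse-to-fine "next-scale prediction".
# The scale schedule and toy predictor below are illustrative
# assumptions, not the released VAR architecture.
scales = [1, 2, 4, 8, 16]            # side lengths of successive token maps
vocab, dim = 4096, 64                # codebook size / width (assumed values)
embed = torch.nn.Embedding(vocab, dim)
head = torch.nn.Linear(dim, vocab)   # toy stand-in for the AR transformer

@torch.no_grad()
def generate():
    context = torch.zeros(1, dim)    # stands in for class/start conditioning
    maps = []
    for s in scales:
        # Predict all s*s tokens of the next scale in ONE parallel step,
        # conditioned on everything generated at coarser scales.
        logits = head(context).expand(s * s, vocab)
        tokens = torch.multinomial(F.softmax(logits, dim=-1), 1).squeeze(-1)
        maps.append(tokens.view(s, s))
        # Fold the new map back into the context for the next, finer scale.
        context = context + embed(tokens).mean(0, keepdim=True)
    return maps  # a real system decodes these maps to pixels with a VQVAE

print([tuple(m.shape) for m in generate()])  # (1,1), (2,2), ..., (16,16)
```

Because every scale is predicted in a single forward pass instead of token by token, the number of autoregressive steps grows with the number of scales rather than the number of tokens, which is where the reported inference speedup over raster-scan AR models comes from.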
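The -0.998 figure refers to the Pearson correlation of a straight-line fit on log-log axes, which is how a power law loss ≈ a·N^b appears. A tiny illustration with synthetic numbers (not VAR's actual measurements):

```python
import numpy as np

# Illustrative check of a power-law scaling fit: on log-log axes a power
# law is a straight line, and its quality is summarized by the Pearson
# correlation coefficient. The data here is synthetic, not from the paper.
params = np.array([1e8, 3e8, 1e9, 3e9, 1e10])  # hypothetical model sizes N
loss = 5.0 * params ** -0.08                    # an exact power law in N
r = np.corrcoef(np.log(params), np.log(loss))[0, 1]
print(f"Pearson r on log-log axes: {r:.4f}")    # -1.0000 for an exact fit
```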
