
Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens

October 17, 2024
Authors: Lijie Fan, Tianhong Li, Siyang Qin, Yuanzhen Li, Chen Sun, Michael Rubinstein, Deqing Sun, Kaiming He, Yonglong Tian
cs.AI

Abstract

Scaling up autoregressive models in vision has not proven as beneficial as in large language models. In this work, we investigate this scaling problem in the context of text-to-image generation, focusing on two critical factors: whether models use discrete or continuous tokens, and whether tokens are generated in a random or fixed raster order using BERT- or GPT-like transformer architectures. Our empirical results show that, while all models scale effectively in terms of validation loss, their evaluation performance -- measured by FID, GenEval score, and visual quality -- follows different trends. Models based on continuous tokens achieve significantly better visual quality than those using discrete tokens. Furthermore, the generation order and attention mechanisms significantly affect the GenEval score: random-order models achieve notably better GenEval scores than raster-order models. Inspired by these findings, we train Fluid, a random-order autoregressive model on continuous tokens. The Fluid 10.5B model achieves a new state-of-the-art zero-shot FID of 6.16 on MS-COCO 30K, and a 0.69 overall score on the GenEval benchmark. We hope our findings and results will encourage future efforts to further bridge the scaling gap between vision and language models.
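To make the raster-order vs. random-order distinction concrete, here is a minimal sketch of the two generation schedules over a grid of image tokens. All names (`raster_order`, `random_order`, `generate`, `predict_token`) are illustrative, not from the paper; the stand-in `predict_token` returns a single float, whereas Fluid predicts continuous latent vectors (sampled via a small diffusion head rather than a softmax over a discrete codebook).

```python
import random

def raster_order(h, w):
    # Fixed raster scan: left-to-right, top-to-bottom (GPT-like causal order).
    return [(r, c) for r in range(h) for c in range(w)]

def random_order(h, w, seed=0):
    # Random permutation of all token positions (BERT-like / masked order).
    order = raster_order(h, w)
    random.Random(seed).shuffle(order)
    return order

def generate(order, predict_token):
    # Autoregressively fill the grid one token at a time in the given order.
    # `predict_token(grid, pos)` stands in for the model: it sees the
    # partially generated grid and predicts a continuous token for `pos`.
    grid = {}
    for pos in order:
        grid[pos] = predict_token(grid, pos)
    return grid

# Toy usage: a dummy predictor that just counts previously generated tokens.
tokens = generate(random_order(4, 4, seed=1),
                  lambda grid, pos: float(len(grid)))
```

Both schedules visit every position exactly once; the difference is purely in which positions condition on which, which is what the paper links to the GenEval gap between random- and raster-order models.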
