

Diffusion-RWKV: Scaling RWKV-Like Architectures for Diffusion Models

April 6, 2024
Authors: Zhengcong Fei, Mingyuan Fan, Changqian Yu, Debang Li, Junshi Huang
cs.AI

Abstract

Transformers have catalyzed advancements in computer vision and natural language processing (NLP). However, their substantial computational complexity limits their application in long-context tasks such as high-resolution image generation. This paper introduces a series of architectures adapted from the RWKV model used in NLP, with the requisite modifications for diffusion models applied to image generation tasks, referred to as Diffusion-RWKV. Similar to diffusion with Transformers, our model is designed to efficiently handle patchified inputs in a sequence with extra conditions, while also scaling up effectively, accommodating both large-scale parameters and extensive datasets. Its distinctive advantage lies in its reduced spatial aggregation complexity, which makes it exceptionally well suited to processing high-resolution images and eliminates the need for windowing or group caching operations. Experimental results on both conditional and unconditional image generation tasks demonstrate that Diffusion-RWKV achieves performance on par with or surpassing existing CNN- or Transformer-based diffusion models on FID and IS metrics while significantly reducing total FLOP usage.
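
To make the abstract's architectural claims concrete, below is a minimal, hypothetical PyTorch sketch of how such a model could be organized, assuming a DiT-style layout (patchify latents into a token sequence, inject a timestep-conditioning embedding, stack residual blocks) and a heavily simplified bidirectional prefix-sum mixer standing in for the exact RWKV recurrence. All names (BiWKVMix, DRWKVBlock, cond_proj, etc.) are invented for illustration; this is not the paper's implementation.

# Hypothetical sketch of a Diffusion-RWKV-style block: patchified tokens,
# timestep conditioning, and an O(L) spatial mixer instead of attention.
# Simplified for illustration; not the authors' code.
import torch
import torch.nn as nn


class BiWKVMix(nn.Module):
    """Simplified bidirectional WKV-style mixer with linear-cost aggregation."""
    def __init__(self, dim):
        super().__init__()
        self.receptance = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (B, L, D)
        r = torch.sigmoid(self.receptance(x))      # RWKV-style gating
        k = torch.exp(self.key(x).clamp(max=30))   # positive token weights
        v = self.value(x)
        # Forward + backward prefix sums give each token a weighted average
        # over the whole sequence at O(L) cost (current token counted once).
        kv = k * v
        num = torch.cumsum(kv, dim=1) + torch.flip(
            torch.cumsum(torch.flip(kv, [1]), dim=1), [1]) - kv
        den = torch.cumsum(k, dim=1) + torch.flip(
            torch.cumsum(torch.flip(k, [1]), dim=1), [1]) - k
        return self.out(r * num / (den + 1e-6))


class DRWKVBlock(nn.Module):
    """Transformer-style residual block with the WKV mixer in place of attention."""
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.mix = BiWKVMix(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))
        # One simple option for folding in the timestep/class condition.
        self.cond_proj = nn.Linear(dim, dim)

    def forward(self, x, cond):                    # cond: (B, D) embedding
        x = x + self.cond_proj(cond).unsqueeze(1)
        x = x + self.mix(self.norm1(x))
        return x + self.mlp(self.norm2(x))


if __name__ == "__main__":
    B, C, H, W, P, D = 2, 4, 32, 32, 2, 256
    patchify = nn.Conv2d(C, D, kernel_size=P, stride=P)    # latents -> tokens
    block = DRWKVBlock(D)
    latents = torch.randn(B, C, H, W)                      # e.g. VAE latents
    tokens = patchify(latents).flatten(2).transpose(1, 2)  # (B, L, D)
    t_emb = torch.randn(B, D)                              # timestep embedding
    print(block(tokens, t_emb).shape)                      # (B, 256, 256)

The sketch is meant to show the complexity argument rather than the exact recurrence: the two cumulative sums aggregate over all L tokens at O(L) cost, whereas self-attention scales as O(L^2), which is why no windowing or caching of token groups is needed as resolution grows.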
