

Diffusion-RWKV: Scaling RWKV-Like Architectures for Diffusion Models

April 6, 2024
Authors: Zhengcong Fei, Mingyuan Fan, Changqian Yu, Debang Li, Junshi Huang
cs.AI

Abstract

Transformers have catalyzed advances in computer vision and natural language processing (NLP). However, their substantial computational complexity limits their application to long-context tasks such as high-resolution image generation. This paper introduces a series of architectures adapted from the RWKV model used in NLP, with the modifications required for diffusion models applied to image generation tasks, referred to as Diffusion-RWKV. Like diffusion models built on Transformers, our model is designed to efficiently handle patchnified inputs as a sequence with extra conditions, while also scaling effectively to large parameter counts and extensive datasets. Its distinctive advantage is its reduced spatial-aggregation complexity, which makes it exceptionally well suited to high-resolution images and eliminates the need for windowing or group-cached operations. Experimental results on both conditional and unconditional image generation tasks demonstrate that Diffusion-RWKV matches or surpasses existing CNN-based and Transformer-based diffusion models on FID and IS metrics while significantly reducing total FLOP usage.
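
The abstract highlights two architectural points: images are "patchnified" into a token sequence carrying extra conditions, and spatial aggregation runs in linear rather than quadratic time in the sequence length, which is what removes the need for windowing at high resolution. The sketch below is a minimal illustration of those two ideas only, not the paper's implementation; the module names (`patchify`, `LinearTimeMixing`) and the simplified decayed recurrence are assumptions made for illustration.

```python
# Minimal sketch (hypothetical, not the authors' code) of the two ideas above:
# (1) flatten an image into a patch sequence, (2) mix tokens with an
# RWKV-style linear-time recurrence instead of O(N^2) self-attention.

import torch
import torch.nn as nn


def patchify(images: torch.Tensor, patch_size: int) -> torch.Tensor:
    """Flatten (B, C, H, W) images into a (B, N, C*p*p) patch sequence."""
    b, c, h, w = images.shape
    p = patch_size
    x = images.reshape(b, c, h // p, p, w // p, p)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(b, (h // p) * (w // p), c * p * p)


class LinearTimeMixing(nn.Module):
    """Simplified RWKV-like token mixing: an exponentially decayed weighted
    running average over the sequence, computed with an O(N) scan."""

    def __init__(self, dim: int):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(dim))   # per-channel decay rate
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.receptance = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        k = self.key(x).exp()                 # positive weights per token
        v = self.value(x)
        w = torch.sigmoid(self.decay)         # decay in (0, 1)
        num = torch.zeros(b, d, device=x.device)
        den = torch.zeros(b, d, device=x.device)
        outs = []
        for t in range(n):                    # linear scan over tokens
            num = w * num + k[:, t] * v[:, t]
            den = w * den + k[:, t]
            outs.append(num / (den + 1e-6))
        y = torch.stack(outs, dim=1)
        return self.out(torch.sigmoid(self.receptance(x)) * y)


# Usage: a 32x32 latent with 4 channels and 2x2 patches gives 256 tokens.
latents = torch.randn(1, 4, 32, 32)
tokens = patchify(latents, patch_size=2)      # (1, 256, 16)
mix = LinearTimeMixing(dim=16)
print(mix(tokens).shape)                      # torch.Size([1, 256, 16])
```

Because the scan cost grows linearly with the number of patches, doubling image resolution quadruples the token count but not the per-token mixing cost, which is the intuition behind the abstract's claim about high-resolution scalability.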
