
DiffusionBlocks: Blockwise Training for Generative Models via Score-Based Diffusion

June 17, 2025
Authors: Makoto Shing, Takuya Akiba
cs.AI

Abstract

Training large neural networks with end-to-end backpropagation creates significant memory bottlenecks, limiting accessibility to state-of-the-art AI research. We propose DiffusionBlocks, a novel training framework that interprets neural network blocks as performing denoising operations in a continuous-time diffusion process. By partitioning the network into independently trainable blocks and optimizing noise level assignments based on equal cumulative probability mass, our approach achieves significant memory efficiency while maintaining competitive performance compared to traditional backpropagation in generative tasks. Experiments on image generation and language modeling tasks demonstrate memory reduction proportional to the number of blocks while achieving superior performance. DiffusionBlocks provides a promising pathway for democratizing access to large-scale neural network training with limited computational resources.
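The abstract describes two mechanisms: assigning each block a noise-level interval that carries equal cumulative probability mass, and updating each block independently so that gradients never span the whole network. The sketch below illustrates both ideas; it is not the authors' code. It assumes a lognormal noise-level distribution (common in score-based diffusion), illustrative distribution parameters, and a block that directly predicts the clean signal with a simple squared-error denoising loss.

```python
# Minimal sketch of equal-mass noise partitioning and independent blockwise
# training. Distribution parameters, the denoising target, and the loss are
# illustrative assumptions, not the paper's exact formulation.
import math
import torch


def equal_mass_noise_boundaries(num_blocks: int,
                                log_sigma_mean: float = -1.2,
                                log_sigma_std: float = 1.2) -> torch.Tensor:
    """Split the noise-level distribution into `num_blocks` intervals that
    each carry equal cumulative probability mass (quantiles of log-sigma)."""
    dist = torch.distributions.Normal(log_sigma_mean, log_sigma_std)
    probs = torch.linspace(0.0, 1.0, num_blocks + 1)
    probs = probs.clamp(1e-4, 1 - 1e-4)  # keep the inverse CDF finite at the ends
    return dist.icdf(probs).exp()        # boundaries sigma_0 < ... < sigma_B


def train_block_step(block: torch.nn.Module,
                     optimizer: torch.optim.Optimizer,
                     x_clean: torch.Tensor,
                     sigma_lo: float,
                     sigma_hi: float) -> float:
    """One independent denoising step for a single block on its assigned
    noise interval. Only this block's activations and gradients are kept,
    so peak memory scales with one block rather than the full network."""
    batch = x_clean.shape[0]
    log_sigma = torch.empty(batch, device=x_clean.device).uniform_(
        math.log(sigma_lo), math.log(sigma_hi))
    sigma = log_sigma.exp().view(-1, *([1] * (x_clean.dim() - 1)))
    noise = torch.randn_like(x_clean)
    x_noisy = x_clean + sigma * noise
    pred = block(x_noisy)                 # assumed: block predicts the clean signal
    loss = ((pred - x_clean) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()                       # gradients stay inside this block
    optimizer.step()
    return loss.item()
```

Under these assumptions, each of the B equal-mass intervals [sigma_i, sigma_{i+1}] would be handled by one block trained with its own optimizer, which is where a memory reduction roughly proportional to the number of blocks would come from.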