DiffusionBlocks: Blockwise Training for Generative Models via Score-Based Diffusion
June 17, 2025
Authors: Makoto Shing, Takuya Akiba
cs.AI
Abstract
Training large neural networks with end-to-end backpropagation creates
significant memory bottlenecks, limiting accessibility to state-of-the-art AI
research. We propose DiffusionBlocks, a novel training framework
that interprets neural network blocks as performing denoising operations in a
continuous-time diffusion process. By partitioning the network into
independently trainable blocks and optimizing noise-level assignments based on
equal cumulative probability mass, our approach achieves significant memory
efficiency on generative tasks while maintaining performance competitive with
traditional end-to-end backpropagation. Experiments on image generation and
language modeling tasks demonstrate memory reduction proportional to the number
of blocks while achieving superior performance. DiffusionBlocks provides a
promising pathway for democratizing access to large-scale neural network
training with limited computational resources.
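The key mechanism named in the abstract, assigning each block a noise-level interval that contains an equal share of cumulative probability mass, can be illustrated with a minimal sketch. The sketch below assumes an EDM-style log-normal distribution over log(sigma); the distribution, its parameters (p_mean, p_std), the block count, and the function name partition_noise_levels are illustrative assumptions rather than the paper's actual configuration.

```python
# Hypothetical sketch: split the noise-level axis into intervals of equal
# cumulative probability mass, one interval per independently trained block.
# The log-normal prior over sigma is an assumption, not the paper's setup.
import numpy as np
from scipy.stats import norm

def partition_noise_levels(num_blocks, p_mean=-1.2, p_std=1.2):
    """Return sigma boundaries such that each of the num_blocks intervals
    holds probability mass 1/num_blocks under log(sigma) ~ N(p_mean, p_std)."""
    # Quantile boundaries in probability space: 0, 1/B, 2/B, ..., 1.
    probs = np.linspace(0.0, 1.0, num_blocks + 1)
    # Inverse CDF maps each boundary back to log(sigma); exponentiate to sigma.
    log_sigma_bounds = norm.ppf(probs, loc=p_mean, scale=p_std)
    return np.exp(log_sigma_bounds)  # outer boundaries are 0 and infinity

if __name__ == "__main__":
    bounds = partition_noise_levels(num_blocks=4)
    for b, (lo, hi) in enumerate(zip(bounds[:-1], bounds[1:])):
        print(f"block {b}: sigma in [{lo:.3f}, {hi:.3f}]")
```

Under this reading, each block would be trained to denoise only within its own sigma interval, which is what removes the need for end-to-end backpropagation across blocks.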