Efficient Training with Denoised Neural Weights
July 16, 2024
Authors: Yifan Gong, Zheng Zhan, Yanyu Li, Yerlan Idelbayev, Andrey Zharkov, Kfir Aberman, Sergey Tulyakov, Yanzhi Wang, Jian Ren
cs.AI
Abstract
Good weight initialization serves as an effective measure to reduce the
training cost of a deep neural network (DNN) model. The choice of how to
initialize parameters is challenging and may require manual tuning, which can
be time-consuming and prone to human error. To overcome such limitations, this
work takes a novel step towards building a weight generator to synthesize the
neural weights for initialization. We use the image-to-image translation task
with generative adversarial networks (GANs) as an example due to the ease of
collecting model weights spanning a wide range. Specifically, we first collect
a dataset with various image editing concepts and their corresponding trained
weights, which are later used for the training of the weight generator. To
address the different characteristics among layers and the substantial number
of weights to be predicted, we divide the weights into equal-sized blocks and
assign each block an index. Subsequently, a diffusion model is trained with
such a dataset using both text conditions of the concept and the block indexes.
By initializing the image translation model with the denoised weights predicted
by our diffusion model, the training requires only 43.3 seconds. Compared to
training from scratch (i.e., Pix2pix), we achieve a 15x training time
acceleration for a new concept while obtaining even better image generation
quality.
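The block-partitioning step the abstract describes, dividing all trained weights into equal-sized blocks and assigning each an index for the diffusion model to condition on, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `make_blocks` and the `block_size` parameter are assumptions for demonstration.

```python
# Illustrative sketch of the block-partitioning step: flatten all layer
# weights, cut them into equal-sized blocks, and pair each block with an
# index. Names here are hypothetical, not from the paper's code.
import numpy as np

def make_blocks(weights, block_size):
    """Flatten a list of weight arrays and split into equal-sized blocks.

    Returns a list of (index, block) pairs; the last block is zero-padded
    so every block holds exactly `block_size` values.
    """
    flat = np.concatenate([w.ravel() for w in weights])
    pad = (-len(flat)) % block_size          # padding needed to fill the last block
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, block_size)    # one row per block
    return [(i, b) for i, b in enumerate(blocks)]

# Example: two small "layers" (12 values total) cut into blocks of 4.
layers = [np.arange(6, dtype=np.float32), np.arange(6, 12, dtype=np.float32)]
pairs = make_blocks(layers, block_size=4)
print(len(pairs))            # 3 blocks
print(pairs[0][1].tolist())  # first block: [0.0, 1.0, 2.0, 3.0]
```

Fixing the block size keeps the generator's output dimension constant regardless of layer shapes, which is why an explicit index is needed to tell blocks apart during denoising.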