

Neural Network Diffusion

February 20, 2024
作者: Kai Wang, Zhaopan Xu, Yukun Zhou, Zelin Zang, Trevor Darrell, Zhuang Liu, Yang You
cs.AI

Abstract
Diffusion models have achieved remarkable success in image and video generation. In this work, we demonstrate that diffusion models can also generate high-performing neural network parameters. Our approach is simple, utilizing an autoencoder and a standard latent diffusion model. The autoencoder extracts latent representations of a subset of the trained network parameters. A diffusion model is then trained to synthesize these latent parameter representations from random noise. It then generates new representations that are passed through the autoencoder's decoder, whose outputs are ready to use as new subsets of network parameters. Across various architectures and datasets, our diffusion process consistently generates models of comparable or improved performance over trained networks, with minimal additional cost. Notably, we empirically find that the generated models perform differently from the trained networks. Our results encourage more exploration of the versatile use of diffusion models.
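The two-stage pipeline the abstract describes (an autoencoder over flattened parameter subsets, then a diffusion model sampled in latent space and decoded back into parameters) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter-subset size, latent dimension, linear autoencoder weights, and the zero-output placeholder denoiser are all assumptions standing in for trained components.

```python
import numpy as np

# Hedged sketch of the pipeline from the abstract. All shapes, weights,
# and the denoiser below are illustrative placeholders, not the paper's
# actual trained models.

rng = np.random.default_rng(0)

D = 2048   # size of one flattened parameter subset (assumed)
Z = 128    # autoencoder latent dimension (assumed)

# --- Autoencoder (linear, untrained placeholder) ---------------------
W_enc = rng.standard_normal((D, Z)) / np.sqrt(D)
W_dec = rng.standard_normal((Z, D)) / np.sqrt(Z)

def encode(params):            # (batch, D) -> (batch, Z)
    return params @ W_enc

def decode(latents):           # (batch, Z) -> (batch, D)
    return latents @ W_dec

# --- Latent diffusion: a standard DDPM-style reverse process ---------
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoiser(z_t, t):
    """Placeholder for the trained noise-prediction network."""
    return np.zeros_like(z_t)  # a real model would predict the noise

def sample_latent(n):
    z = rng.standard_normal((n, Z))            # start from pure noise
    for t in reversed(range(T)):
        eps = denoiser(z, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        z = (z - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                               # no noise at the last step
            z += np.sqrt(betas[t]) * rng.standard_normal(z.shape)
    return z

# Generate new parameter subsets: sample latents, then decode them.
new_params = decode(sample_latent(4))
print(new_params.shape)   # (4, 2048)
```

In the paper's setting, the decoded vectors would be reshaped and loaded back into the network in place of the corresponding trained parameter subset, and the resulting model evaluated directly.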

