Orthogonal Adaptation for Modular Customization of Diffusion Models
December 5, 2023
Authors: Ryan Po, Guandao Yang, Kfir Aberman, Gordon Wetzstein
cs.AI
Abstract
Customization techniques for text-to-image models have paved the way for a
wide range of previously unattainable applications, enabling the generation of
specific concepts across diverse contexts and styles. While existing methods
facilitate high-fidelity customization for individual concepts or a limited,
pre-defined set of them, they fall short of achieving scalability, where a
single model can seamlessly render countless concepts. In this paper, we
address a new problem called Modular Customization, with the goal of
efficiently merging customized models that were fine-tuned independently for
individual concepts. This allows the merged model to jointly synthesize
concepts in one image without compromising fidelity or incurring any additional
computational costs.
To address this problem, we introduce Orthogonal Adaptation, a method
designed to encourage the customized models, which do not have access to each
other during fine-tuning, to have orthogonal residual weights. This ensures
that during inference time, the customized models can be summed with minimal
interference.
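The core idea can be illustrated with a small linear-algebra sketch. The snippet below is a hypothetical numpy toy, not the paper's implementation: each concept's low-rank residual is built on a fixed, disjoint slice of a shared orthogonal basis (stand-ins for LoRA-style `A`/`B` factors), so the residuals provably do not interfere when summed.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hypothetical weight dimension and low-rank adapter rank

# Shared random orthogonal basis; each concept is assigned disjoint rows.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A1, A2 = Q[:r], Q[r:2 * r]         # fixed, frozen "down" projections
B1 = rng.standard_normal((d, r))   # trainable "up" projections (stand-ins
B2 = rng.standard_normal((d, r))   # for independently fine-tuned values)

dW1, dW2 = B1 @ A1, B2 @ A2        # low-rank residual weights per concept

# Disjoint orthonormal rows => A1 @ A2.T == 0, so the residuals cannot
# interact: dW1 projects inputs from concept 2's subspace to zero.
assert np.allclose(dW1 @ A2.T, 0.0, atol=1e-10)

W0 = rng.standard_normal((d, d))   # stand-in for a pretrained weight
W_merged = W0 + dW1 + dW2          # merging is a plain sum of residuals

# An input lying in concept 2's subspace sees only concept 2's residual:
x = A2.T @ rng.standard_normal(r)
assert np.allclose(W_merged @ x, (W0 + dW2) @ x, atol=1e-9)
```

Because the `A_i` are fixed before fine-tuning, the independently trained models need no knowledge of each other, yet their residuals occupy orthogonal subspaces and can be summed at inference with minimal cross-talk.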
Our proposed method is both simple and versatile, applicable to nearly all
optimizable weights in the model architecture. Through an extensive set of
quantitative and qualitative evaluations, our method consistently outperforms
relevant baselines in terms of efficiency and identity preservation,
demonstrating a significant leap toward scalable customization of diffusion
models.