Controlling Text-to-Image Diffusion by Orthogonal Finetuning
June 12, 2023
Authors: Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf
cs.AI
Abstract
Large text-to-image diffusion models have impressive capabilities in generating photorealistic images from text prompts. How to effectively guide or control these powerful models to perform different downstream tasks is an important open problem. To tackle this challenge, we introduce a principled finetuning method, Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks. Unlike existing methods, OFT provably preserves hyperspherical energy, which characterizes the pairwise neuron relationships on the unit hypersphere. We find that this property is crucial for preserving the semantic generation ability of text-to-image diffusion models. To improve finetuning stability, we further propose Constrained Orthogonal Finetuning (COFT), which imposes an additional radius constraint on the hypersphere. Specifically, we consider two important text-to-image finetuning tasks: subject-driven generation, where the goal is to generate subject-specific images given a few images of a subject and a text prompt, and controllable generation, where the goal is to enable the model to take in additional control signals. We empirically show that our OFT framework outperforms existing methods in both generation quality and convergence speed.
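For context on the "pairwise neuron relationship" claim, one common definition of hyperspherical energy (following the minimum hyperspherical energy line of work that the abstract echoes; the paper's exact form may differ) is sketched below for a layer with neurons $w_1, \dots, w_n$:

```latex
% Hyperspherical energy of a weight matrix W = [w_1, ..., w_n]
% (one common form; the paper's exact definition may differ).
% \hat{w}_i is the i-th neuron projected onto the unit hypersphere.
\mathrm{HE}(W) = \sum_{i \neq j} \left\lVert \hat{w}_i - \hat{w}_j \right\rVert^{-1},
\qquad \hat{w}_i = \frac{w_i}{\lVert w_i \rVert}.
```

Left-multiplying every neuron by the same orthogonal matrix $R$ preserves all pairwise distances on the unit hypersphere, since $\lVert R\hat{w}_i - R\hat{w}_j \rVert = \lVert \hat{w}_i - \hat{w}_j \rVert$, which is why an orthogonal reparametrization of the weights leaves this energy unchanged.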
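A minimal sketch of the idea for a single linear layer, assuming a Cayley-transform parametrization of the learned orthogonal matrix (the paper additionally uses structural tricks such as block-diagonal matrices for efficiency, omitted here). `OFTLinear` and its parameter names are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OFTLinear(nn.Module):
    """Sketch: orthogonal finetuning of one frozen linear layer.

    W0 stays frozen; only a skew-symmetric Q is learned. The Cayley
    transform R = (I + Q)(I - Q)^{-1} is always orthogonal, so the
    finetuned weight R @ W0 keeps all pairwise neuron angles, and
    hence the layer's hyperspherical energy, unchanged.
    """

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        d = pretrained.out_features
        self.register_buffer("weight0", pretrained.weight.detach().clone())
        bias = pretrained.bias
        self.register_buffer("bias0", bias.detach().clone() if bias is not None else None)
        # Zero init => R = I, i.e. finetuning starts exactly at the pretrained model.
        self.q = nn.Parameter(torch.zeros(d, d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = 0.5 * (self.q - self.q.T)               # enforce Q = -Q^T
        eye = torch.eye(q.size(0), device=q.device, dtype=q.dtype)
        # Cayley transform: (I - Q)^{-1}(I + Q), which equals
        # (I + Q)(I - Q)^{-1} when Q is skew-symmetric.
        rot = torch.linalg.solve(eye - q, eye + q)
        return F.linear(x, rot @ self.weight0, self.bias0)
```

The constrained variant (COFT) described in the abstract would additionally keep $R$ within a bounded deviation from the identity, e.g. by limiting the magnitude of $Q$; the paper's exact constraint may differ from this reading.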