Improving Long-Text Alignment for Text-to-Image Diffusion Models
October 15, 2024
Authors: Luping Liu, Chao Du, Tianyu Pang, Zehan Wang, Chongxuan Li, Dong Xu
cs.AI
Abstract
The rapid advancement of text-to-image (T2I) diffusion models has enabled
them to generate unprecedented results from given texts. However, as text
inputs become longer, existing encoding methods like CLIP face limitations, and
aligning the generated images with long texts becomes challenging. To tackle
these issues, we propose LongAlign, which includes a segment-level encoding
method for processing long texts and a decomposed preference optimization
method for effective alignment training. For segment-level encoding, long texts
are divided into multiple segments and processed separately. This method
overcomes the maximum input length limits of pretrained encoding models. For
preference optimization, we provide decomposed CLIP-based preference models to
fine-tune diffusion models. Specifically, to utilize CLIP-based preference
models for T2I alignment, we delve into their scoring mechanisms and find that
the preference scores can be decomposed into two components: a text-relevant
part that measures T2I alignment and a text-irrelevant part that assesses other
visual aspects of human preference. Additionally, we find that the
text-irrelevant part contributes to a common overfitting problem during
fine-tuning. To address this, we propose a reweighting strategy that assigns
different weights to these two components, thereby reducing overfitting and
enhancing alignment. After fine-tuning 512×512 Stable Diffusion (SD)
v1.5 for about 20 hours using our method, the fine-tuned SD outperforms
stronger foundation models in T2I alignment, such as PixArt-alpha and
Kandinsky v2.2. The code is available at
https://github.com/luping-liu/LongAlign.
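As a concrete illustration of the segment-level encoding described in the abstract, here is a minimal sketch assuming a CLIP text encoder with a 77-token input limit (as in SD v1.5). The helper `encode_long_text` and the choice to concatenate per-segment embeddings along the sequence axis are illustrative assumptions, not the LongAlign implementation itself.

```python
# Minimal sketch of segment-level encoding: split a long prompt into
# CLIP-sized segments, encode each separately, and concatenate the
# resulting embeddings. Names are illustrative, not from LongAlign.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_text(text: str, max_len: int = 77) -> torch.Tensor:
    # Tokenize without truncation so no part of the long prompt is lost.
    ids = tokenizer(text, truncation=False).input_ids
    ids = ids[1:-1]  # strip BOS/EOS; they are re-added per segment
    bos, eos = tokenizer.bos_token_id, tokenizer.eos_token_id
    chunk = max_len - 2  # leave room for BOS/EOS in every segment
    segment_embeds = []
    for i in range(0, len(ids), chunk):
        seg = torch.tensor([[bos] + ids[i:i + chunk] + [eos]])
        with torch.no_grad():
            out = text_encoder(seg).last_hidden_state  # (1, seg_len, dim)
        segment_embeds.append(out)
    # One long conditioning sequence for the diffusion model's cross-attention.
    return torch.cat(segment_embeds, dim=1)
```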
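The decomposed, reweighted preference score can likewise be sketched. The version below is one plausible instantiation under stated assumptions, not necessarily the paper's exact formulation: L2-normalized CLIP-style embeddings, a precomputed mean text embedding `t_mean` standing in for the text-irrelevant direction, and an illustrative weight `w_irrelevant` for down-weighting that component.

```python
# Sketch of a decomposed preference score with reweighting. Assumes
# L2-normalized image/text embeddings from a CLIP-based preference model
# and a mean text embedding precomputed over a prompt corpus.
import torch

def reweighted_preference(
    img_emb: torch.Tensor,      # (d,) normalized image embedding
    txt_emb: torch.Tensor,      # (d,) normalized text embedding
    t_mean: torch.Tensor,       # (d,) mean text embedding over many prompts
    w_irrelevant: float = 0.3,  # illustrative down-weighting factor
) -> torch.Tensor:
    # Text-relevant part: alignment with the prompt-specific direction.
    relevant = torch.dot(txt_emb - t_mean, img_emb)
    # Text-irrelevant part: alignment with the shared, prompt-independent
    # direction, which the abstract identifies as a source of overfitting.
    irrelevant = torch.dot(t_mean, img_emb)
    # Down-weighting the text-irrelevant part reduces overfitting while
    # preserving the T2I-alignment signal.
    return relevant + w_irrelevant * irrelevant
```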