Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step
June 6, 2024
Authors: Zhanhao Liang, Yuhui Yuan, Shuyang Gu, Bohan Chen, Tiankai Hang, Ji Li, Liang Zheng
cs.AI
Abstract
Recently, Direct Preference Optimization (DPO) has extended its success from
aligning large language models (LLMs) to aligning text-to-image diffusion
models with human preferences. Unlike most existing DPO methods that assume all
diffusion steps share a consistent preference order with the final generated
images, we argue that this assumption neglects step-specific denoising
performance and that preference labels should be tailored to each step's
contribution. To address this limitation, we propose Step-aware Preference
Optimization (SPO), a novel post-training approach that independently evaluates
and adjusts the denoising performance at each step, using a step-aware
preference model and a step-wise resampler to ensure accurate step-aware
supervision. Specifically, at each denoising step, we sample a pool of images,
find a suitable win-lose pair, and, most importantly, randomly select a single
image from the pool to initialize the next denoising step. This step-wise
resampler process ensures the next win-lose image pair comes from the same
image, making the win-lose comparison independent of the previous step. To
assess the preferences at each step, we train a separate step-aware preference
model that can be applied to both noisy and clean images. Our experiments with
Stable Diffusion v1.5 and SDXL demonstrate that SPO significantly outperforms
the latest Diffusion-DPO in aligning generated images with complex, detailed
prompts and enhancing aesthetics, while also achieving a more than 20x
improvement in training efficiency. Code and model:
https://rockeycoss.github.io/spo.github.io/
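The step-wise resampling procedure described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch under assumptions, not the authors' implementation: the names `spo_collect_pairs`, `denoiser`, and `step_preference_model`, the pool size, and the returned win-lose tuples are hypothetical placeholders standing in for the paper's actual training pipeline.

```python
import random

def spo_collect_pairs(x_t, prompt, timesteps, denoiser,
                      step_preference_model, pool_size=4):
    """Sketch of SPO's step-wise resampling (hypothetical names/signatures).

    At every denoising step: sample a pool of candidate next-step latents,
    score them with a step-aware preference model, keep the best/worst as
    this step's win-lose pair, then randomly pick ONE pool member to seed
    the next step so that later comparisons stay independent of earlier
    choices.
    """
    pairs = []
    for t in timesteps:
        # Pool of candidate latents produced by one stochastic denoising step.
        pool = [denoiser(x_t, t, prompt) for _ in range(pool_size)]

        # Step-aware preference scores; the model is trained to handle
        # noisy intermediate latents as well as clean images.
        scores = [step_preference_model(x, t, prompt) for x in pool]

        # Win-lose pair used for this step's preference (DPO-style) loss.
        win = pool[scores.index(max(scores))]
        lose = pool[scores.index(min(scores))]
        pairs.append((t, win, lose))

        # Step-wise resampler: continue from a random pool member,
        # not necessarily the winner.
        x_t = random.choice(pool)

    return pairs
```

The random re-initialization at the end of each step is the key design choice: because the next step's pool is grown from a single shared latent, the win-lose comparison at that step reflects only that step's denoising performance rather than accumulated choices from earlier steps.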