Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning
December 30, 2025
Authors: Chubin Chen, Sujie Hu, Jiashu Zhu, Meiqi Wu, Jintao Chen, Yanxun Li, Nisha Huang, Chengyu Fang, Jiahong Wu, Xiangxiang Chu, Xiu Li
cs.AI
Abstract
Recent studies have demonstrated significant progress in aligning text-to-image diffusion models with human preferences via Reinforcement Learning from Human Feedback. However, while existing methods achieve high scores on automated reward metrics, they often lead to Preference Mode Collapse (PMC), a specific form of reward hacking in which models converge on narrow, high-scoring outputs (e.g., images with monolithic styles or pervasive overexposure), severely degrading generative diversity. In this work, we introduce and quantify this phenomenon, proposing DivGenBench, a novel benchmark designed to measure the extent of PMC. We posit that this collapse is driven by over-optimization along the reward model's inherent biases. Building on this analysis, we propose Directional Decoupling Alignment (D^2-Align), a novel framework that mitigates PMC by directionally correcting the reward signal. Specifically, our method first learns a directional correction within the reward model's embedding space while keeping the model frozen. This correction is then applied to the reward signal during optimization, preventing the model from collapsing into specific modes and thereby maintaining diversity. Our comprehensive evaluation, combining qualitative analysis with quantitative metrics for both quality and diversity, reveals that D^2-Align achieves superior alignment with human preferences.
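The abstract only sketches the mechanism, so below is a minimal, illustrative PyTorch sketch of the directional-correction idea: a bias direction is estimated in the frozen reward model's embedding space, and the reward gained purely by moving along that direction is subtracted during optimization. All names here (ToyRewardModel, learn_bias_direction, corrected_reward) and the logistic-probe construction of the direction are assumptions for illustration; the paper's actual D^2-Align formulation may differ.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for a frozen preference reward model that exposes an
# embedding and a scalar reward head. The real method would use an existing
# pretrained reward model; this toy module only illustrates the interface.
class ToyRewardModel(torch.nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = torch.nn.Linear(3 * 32 * 32, dim)   # toy image encoder
        self.head = torch.nn.Linear(dim, 1, bias=False)     # scalar reward head

    def embed(self, images):
        return self.encoder(images.flatten(1))

    def forward(self, images):
        return self.head(self.embed(images)).squeeze(-1)

reward_model = ToyRewardModel()
reward_model.requires_grad_(False)  # the reward model stays frozen throughout

# Assumed step 1: learn a unit direction in the frozen embedding space that
# separates "collapsed" samples (e.g., overexposed images) from diverse ones,
# here via a simple logistic probe on detached embeddings.
def learn_bias_direction(collapsed_imgs, diverse_imgs, steps=200, lr=1e-2):
    x = torch.cat([reward_model.embed(collapsed_imgs),
                   reward_model.embed(diverse_imgs)]).detach()
    y = torch.cat([torch.ones(len(collapsed_imgs)), torch.zeros(len(diverse_imgs))])
    w = torch.zeros(x.shape[-1], requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.binary_cross_entropy_with_logits(x @ w, y).backward()
        opt.step()
    return F.normalize(w.detach(), dim=0)  # unit "bias mode" direction

# Assumed step 2: during RL fine-tuning, subtract the reward obtained purely
# from the embedding component along the bias direction, so the policy cannot
# inflate its score by collapsing onto that mode.
def corrected_reward(images, bias_dir, strength=1.0):
    emb = reward_model.embed(images)
    raw = reward_model.head(emb).squeeze(-1)
    bias_component = (emb @ bias_dir).unsqueeze(-1) * bias_dir  # projection onto bias_dir
    bias_reward = reward_model.head(bias_component).squeeze(-1)
    return raw - strength * bias_reward

# Toy usage with random tensors standing in for decoded images.
collapsed = torch.randn(16, 3, 32, 32) + 2.0
diverse = torch.randn(16, 3, 32, 32)
direction = learn_bias_direction(collapsed, diverse)
print(corrected_reward(torch.randn(4, 3, 32, 32), direction))
```

In a full RLHF pipeline, the corrected reward would simply replace the raw reward model score in whatever policy-optimization objective is used; the sketch above only illustrates how a learned direction can decouple the bias-driven component of the reward from the rest of the signal.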