

Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning

December 30, 2025
Authors: Chubin Chen, Sujie Hu, Jiashu Zhu, Meiqi Wu, Jintao Chen, Yanxun Li, Nisha Huang, Chengyu Fang, Jiahong Wu, Xiangxiang Chu, Xiu Li
cs.AI

Abstract

Recent studies have demonstrated significant progress in aligning text-to-image diffusion models with human preferences via Reinforcement Learning from Human Feedback. However, while existing methods achieve high scores on automated reward metrics, they often lead to Preference Mode Collapse (PMC), a specific form of reward hacking where models converge on narrow, high-scoring outputs (e.g., images with monolithic styles or pervasive overexposure), severely degrading generative diversity. In this work, we introduce and quantify this phenomenon, proposing DivGenBench, a novel benchmark designed to measure the extent of PMC. We posit that this collapse is driven by over-optimization along the reward model's inherent biases. Building on this analysis, we propose Directional Decoupling Alignment (D^2-Align), a novel framework that mitigates PMC by directionally correcting the reward signal. Specifically, our method first learns a directional correction within the reward model's embedding space while keeping the model frozen. This correction is then applied to the reward signal during the optimization process, preventing the model from collapsing into specific modes and thereby maintaining diversity. Our comprehensive evaluation, combining qualitative analysis with quantitative metrics for both quality and diversity, reveals that D^2-Align achieves superior alignment with human preferences.
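The abstract only sketches the mechanism, so the snippet below is a minimal, hypothetical illustration of the general idea of directionally correcting a reward signal, not the paper's implementation. It assumes a frozen reward model that yields an embedding per image-prompt pair and scores it with a linear head, plus an already-learned bias direction; the names (`corrected_reward`, `bias_dir`), the projection-based form of the correction, and the scalar `alpha` are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def corrected_reward(embeddings: torch.Tensor,   # (B, D) frozen reward-model embeddings
                     reward_head: torch.Tensor,  # (D,)   frozen linear scoring head
                     bias_dir: torch.Tensor,     # (D,)   learned bias direction (assumed given)
                     alpha: float = 1.0) -> torch.Tensor:
    """Sketch: score samples after removing each embedding's component
    along a learned bias direction in the reward model's embedding space."""
    d = F.normalize(bias_dir, dim=-1)                               # unit bias direction
    proj = (embeddings @ d)[:, None] * d                            # (B, D) component along d
    decoupled = embeddings - alpha * proj                           # project the bias out
    return decoupled @ reward_head                                  # (B,) corrected reward signal

# Toy usage with random tensors (4 samples, 8-dim embedding space).
emb, w, d = torch.randn(4, 8), torch.randn(8), torch.randn(8)
print(corrected_reward(emb, w, d))
```

In this reading, samples whose reward comes mostly from the biased direction (e.g., a single over-rewarded style) lose that advantage, so the RL objective no longer pushes the generator toward that mode; how the direction is actually learned is left to the paper.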