**Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation**
November 3, 2025
Authors: Jie Du, Xinyu Gong, Qingshan Tan, Wen Li, Yangming Cheng, Weitao Wang, Chenlu Zhan, Suhui Wu, Hao Zhang, Jun Zhang
cs.AI
Abstract
Recent studies have identified Direct Preference Optimization (DPO) as an
efficient and reward-free approach to improving video generation quality.
However, existing methods largely follow image-domain paradigms and are mainly
developed on small-scale models (approximately 2B parameters), limiting their
ability to address the unique challenges of video tasks, such as costly data
construction, unstable training, and heavy memory consumption. To overcome
these limitations, we introduce GT-Pair, which automatically builds
high-quality preference pairs by using real videos as positives and
model-generated videos as negatives, eliminating the need for any external
annotation. We further present Reg-DPO, which incorporates the SFT loss as a
regularization term into the DPO objective to enhance training stability and
generation fidelity. Additionally, by combining the FSDP framework with
multiple memory optimization techniques, our approach achieves nearly three
times higher training capacity than using FSDP alone. Extensive experiments on
both image-to-video (I2V) and text-to-video (T2V) tasks across multiple datasets demonstrate that our method
consistently outperforms existing approaches, delivering superior video
generation quality.
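
As a rough illustration of the objective described above, the sketch below combines a standard DPO term, computed on each GT-Pair with the real video as the positive and the model-generated video as the negative, with an SFT loss added as a regularizer. This is a minimal sketch based only on the abstract: the function name `reg_dpo_loss`, the hyperparameters `beta` and `lam`, and the exact form of `sft_loss` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def reg_dpo_loss(policy_pos_logp, policy_neg_logp,
                 ref_pos_logp, ref_neg_logp,
                 sft_loss, beta=0.1, lam=1.0):
    """Sketch of an SFT-regularized DPO objective.

    All argument names and the weights `beta`/`lam` are illustrative
    assumptions; the paper may parameterize the loss differently.
    """
    # Implicit-reward log-ratios of the trainable policy vs. the frozen
    # reference model, on the GT positive and the generated negative.
    pos_logratio = policy_pos_logp - ref_pos_logp
    neg_logratio = policy_neg_logp - ref_neg_logp
    # Standard DPO term: prefer the real (GT) video over the generated one.
    dpo_term = -F.logsigmoid(beta * (pos_logratio - neg_logratio))
    # SFT regularizer (e.g., the usual supervised loss on the GT video)
    # anchors training, which the abstract credits for improved stability.
    return (dpo_term + lam * sft_loss).mean()


# Toy usage with per-sample scalars standing in for log-probabilities/losses.
b = 4
loss = reg_dpo_loss(torch.randn(b), torch.randn(b),
                    torch.randn(b), torch.randn(b),
                    sft_loss=torch.rand(b))
```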