
From Denoising to Refining: A Corrective Framework for Vision-Language Diffusion Model

October 22, 2025
Authors: Yatai Ji, Teng Wang, Yuying Ge, Zhiheng Liu, Sidi Yang, Ying Shan, Ping Luo
cs.AI

Abstract

Discrete diffusion models have emerged as a promising direction for vision-language tasks, offering bidirectional context modeling and theoretical parallelization. However, their practical application is severely hindered by a train-inference discrepancy that leads to catastrophic error cascades: initial token errors during parallel decoding pollute the generation context, triggering a chain reaction of compounding mistakes that surface as syntactic errors and semantic hallucinations. To address this fundamental challenge, we reframe the generation process from passive denoising to active refining. We introduce ReDiff, a refining-enhanced diffusion framework that teaches the model to identify and correct its own errors. Our approach features a two-stage training process: first, we instill a foundational revision capability by training the model to revise synthetic errors; second, we implement a novel online self-correction loop in which the model is explicitly trained to revise its own flawed drafts by learning from an expert's corrections. This mistake-driven learning endows the model with the crucial ability to revisit and refine its already generated output, effectively breaking the error cascade. Extensive experiments demonstrate that ReDiff significantly improves the coherence and factual accuracy of generated content, enabling stable and efficient parallel generation far superior to traditional denoising methods. Our code and models are available at https://rediff-hku.github.io/.