### Recovering Hidden Reward in Diffusion-Based Policies
May 1, 2026
Authors: Yanbiao Ji, Qiuchang Li, Yuting Hu, Shaokai Wu, Wenyuan Xie, Guodong Zhang, Qicheng He, Deyi Ji, Yue Ding, Hongtao Lu
cs.AI
Abstract
This paper introduces EnergyFlow, a framework that unifies generative action modeling with inverse reinforcement learning by parameterizing a scalar energy function whose gradient is the denoising field. We establish that under maximum-entropy optimality, the score function learned via denoising score matching recovers the gradient of the expert's soft Q-function, enabling reward extraction without adversarial training. Formally, we prove that constraining the learned field to be conservative reduces hypothesis complexity and tightens out-of-distribution generalization bounds. We further characterize the identifiability of the recovered rewards and bound how score-estimation errors propagate to action preferences. Empirically, EnergyFlow achieves state-of-the-art imitation performance across a range of manipulation tasks while providing an effective reward signal for downstream reinforcement learning, outperforming both adversarial IRL methods and likelihood-based alternatives. These results show that the structural constraints required for valid reward extraction simultaneously serve as beneficial inductive biases for policy generalization. The code is available at https://github.com/sotaagi/EnergyFlow.
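To make the abstract's central claim concrete: under maximum-entropy optimality the expert policy takes the form π*(a|s) ∝ exp(Q_soft(s, a)/α), so the action-score satisfies ∇_a log π*(a|s) = (1/α) ∇_a Q_soft(s, a); a denoising score model that estimates this score therefore recovers the soft Q-function's gradient up to the temperature α. The sketch below is ours, not the authors' released implementation (see the repository linked above for that): it illustrates the conservative-field parameterization in PyTorch, where the denoising field is obtained by differentiating a scalar energy network with respect to the action, so the field is a gradient field by construction and the energy itself can double as a reward surrogate. The module names, dimensions, sign convention, and single-noise-level DSM loss are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class EnergyScoreModel(nn.Module):
    """Scalar energy E(s, a, t) whose action-gradient gives the denoising
    field. Because the field is the gradient of a scalar, it is
    conservative by construction."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, 1),  # scalar energy head
        )

    def energy(self, state, action, t):
        # The noise level t enters as one extra scalar feature per sample.
        return self.net(torch.cat([state, action, t], dim=-1)).squeeze(-1)

    def score(self, state, action, t):
        # Denoising field = -grad_a E (standard EBM sign convention,
        # p ∝ exp(-E); the paper's convention may differ), computed by
        # differentiating through the scalar energy.
        action = action.detach().requires_grad_(True)
        e = self.energy(state, action, t).sum()
        (grad,) = torch.autograd.grad(e, action, create_graph=True)
        return -grad


def dsm_loss(model, state, action, sigma: float = 0.1):
    """Denoising score matching at a single noise level sigma."""
    noise = torch.randn_like(action)
    noisy = action + sigma * noise
    t = torch.full_like(action[:, :1], sigma)
    # Score of the Gaussian perturbation kernel:
    # -(noisy - action) / sigma**2 = -noise / sigma.
    target = -noise / sigma
    pred = model.score(state, noisy, t)
    return ((pred - target) ** 2).mean()


if __name__ == "__main__":
    model = EnergyScoreModel(state_dim=4, action_dim=2)
    s, a = torch.randn(8, 4), torch.randn(8, 2)
    loss = dsm_loss(model, s, a)
    loss.backward()  # create_graph in score() makes this double backward valid
    print(float(loss))
```

Parameterizing the score this way, rather than with an unconstrained vector-field network, is exactly the structural constraint the abstract argues both enables reward extraction and shrinks the hypothesis class.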