From Prior to Pro: Efficient Skill Mastery via Distribution Contractive RL Finetuning
March 10, 2026
Authors: Zhanyi Sun, Shuran Song
cs.AI
Abstract
We introduce Distribution Contractive Reinforcement Learning (DICE-RL), a framework that uses reinforcement learning (RL) as a "distribution contraction" operator to refine pretrained generative robot policies. DICE-RL turns a pretrained behavior prior into a high-performing "pro" policy by amplifying high-success behaviors from online feedback. We pretrain a diffusion- or flow-based policy for broad behavioral coverage, then finetune it with a stable, sample-efficient residual off-policy RL framework that combines selective behavior regularization with value-guided action selection. Extensive experiments and analyses show that DICE-RL reliably improves performance with strong stability and sample efficiency. It enables mastery of complex long-horizon manipulation skills directly from high-dimensional pixel inputs, both in simulation and on a real robot. Project website: https://zhanyisun.github.io/dice.rl.2026/.
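To make the "value-guided action selection" ingredient concrete, below is a minimal PyTorch sketch of the generic best-of-N scheme it typically denotes: draw several candidate actions from the pretrained generative prior and execute the one a learned Q-function scores highest. This is an illustration under stated assumptions, not the authors' implementation; the names `value_guided_action`, `policy`, `q_net`, and `num_candidates` are hypothetical, and the flat observation vector stands in for the paper's pixel inputs.

```python
import torch

@torch.no_grad()
def value_guided_action(policy, q_net, obs, num_candidates=16):
    """Pick the highest-value action among candidates drawn from the prior.

    Hypothetical interfaces (assumptions, not the paper's API):
      policy: callable(obs_batch) -> action_batch, e.g. a diffusion/flow sampler.
      q_net:  callable(obs_batch, action_batch) -> one value per pair.
      obs:    a single observation tensor of shape (obs_dim,).
    """
    # Replicate the observation so the sampler produces N candidate actions.
    obs_batch = obs.unsqueeze(0).expand(num_candidates, -1)  # (N, obs_dim)
    candidates = policy(obs_batch)                           # (N, act_dim)

    # Score every candidate with the learned Q-function and keep the best.
    values = q_net(obs_batch, candidates).squeeze(-1)        # (N,)
    return candidates[torch.argmax(values)]
```

At deployment time one would call this once per control step, e.g. `action = value_guided_action(policy, q_net, obs)`; increasing `num_candidates` trades extra sampling compute for a tighter contraction toward high-value behaviors.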