Compose Your Policies! Improving Diffusion-based or Flow-based Robot Policies via Test-time Distribution-level Composition
October 1, 2025
Authors: Jiahang Cao, Yize Huang, Hanzhong Guo, Rui Zhang, Mu Nan, Weijian Mai, Jiaxu Wang, Hao Cheng, Jingkai Sun, Gang Han, Wen Zhao, Qiang Zhang, Yijie Guo, Qihao Zheng, Chunfeng Song, Xiao Li, Ping Luo, Andrew F. Luo
cs.AI
Abstract
Diffusion-based models for robotic control, including vision-language-action
(VLA) and vision-action (VA) policies, have demonstrated significant
capabilities. Yet their advancement is constrained by the high cost of
acquiring large-scale interaction datasets. This work introduces an alternative
paradigm for enhancing policy performance without additional model training.
Perhaps surprisingly, we demonstrate that the composed policies can exceed the
performance of either parent policy. Our contribution is threefold. First, we
establish a theoretical foundation showing that the convex composition of
distributional scores from multiple diffusion models can yield a superior
one-step functional objective compared to any individual score. A
Grönwall-type bound is then used to show that this single-step improvement
propagates through entire generation trajectories, leading to systemic
performance gains. Second, motivated by these results, we propose General
Policy Composition (GPC), a training-free method that enhances performance by
combining the distributional scores of multiple pre-trained policies via a
convex combination and test-time search. GPC is versatile, allowing for the
plug-and-play composition of heterogeneous policies, including VA and VLA
models, as well as those based on diffusion or flow-matching, irrespective of
their input visual modalities. Third, we provide extensive empirical
validation. Experiments on Robomimic, PushT, and RoboTwin benchmarks, alongside
real-world robotic evaluations, confirm that GPC consistently improves
performance and adaptability across a diverse set of tasks. Further analysis of
alternative composition operators and weighting strategies offers insights into
the mechanisms underlying the success of GPC. These results establish GPC as a
simple yet effective method for improving control performance by leveraging
existing policies.