Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation
August 19, 2025
Authors: Yifu Yuan, Haiqin Cui, Yaoting Huang, Yibin Chen, Fei Ni, Zibin Dong, Pengyi Li, Yan Zheng, Jianye Hao
cs.AI
Abstract
Generalization in embodied AI is hindered by the "seeing-to-doing gap," which
stems from data scarcity and embodiment heterogeneity. To address this, we
pioneer "pointing" as a unified, embodiment-agnostic intermediate
representation, defining four core embodied pointing abilities that bridge
high-level vision-language comprehension with low-level action primitives. We
introduce Embodied-R1, a 3B Vision-Language Model (VLM) specifically designed
for embodied reasoning and pointing. We use a wide range of embodied and
general visual reasoning datasets as sources to construct a large-scale
dataset, Embodied-Points-200K, which supports key embodied pointing
capabilities. We then train Embodied-R1 using a two-stage Reinforced
Fine-tuning (RFT) curriculum with a specialized multi-task reward design.
Embodied-R1 achieves state-of-the-art performance on 11 embodied spatial and
pointing benchmarks. Critically, it demonstrates robust zero-shot
generalization by achieving a 56.2% success rate in SIMPLEREnv and 87.5%
across 8 real-world XArm tasks without any task-specific fine-tuning,
representing a 62% improvement over strong baselines. Furthermore, the model
exhibits high robustness against diverse visual disturbances. Our work shows
that a pointing-centric representation, combined with an RFT training paradigm,
offers an effective and generalizable pathway to closing the perception-action
gap in robotics.
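
The abstract frames pointing as an embodiment-agnostic interface between high-level vision-language understanding and low-level action primitives. As a minimal sketch of that idea (not code from the paper), the snippet below shows one plausible way a 2D image point predicted by a VLM could be grounded into a 3D pick target for a manipulator; the camera intrinsics, function names such as point_to_pick_target, and the hand-off to a grasp primitive are all illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: grounding a VLM's 2D "pointing" output into a 3D pick target.
# None of these names come from the Embodied-R1 paper; they only illustrate how an
# embodiment-agnostic image point could feed a low-level action primitive.
from dataclasses import dataclass
import numpy as np


@dataclass
class CameraIntrinsics:
    fx: float  # focal length in pixels (x)
    fy: float  # focal length in pixels (y)
    cx: float  # principal point (x)
    cy: float  # principal point (y)


def deproject_point(u: float, v: float, depth_m: float, K: CameraIntrinsics) -> np.ndarray:
    """Back-project a pixel (u, v) with metric depth into the camera frame (pinhole model)."""
    x = (u - K.cx) * depth_m / K.fx
    y = (v - K.cy) * depth_m / K.fy
    return np.array([x, y, depth_m])


def point_to_pick_target(point_uv, depth_image, K: CameraIntrinsics, cam_to_base: np.ndarray) -> np.ndarray:
    """Convert a predicted image point into a base-frame position for a pick primitive."""
    u, v = point_uv
    depth_m = float(depth_image[int(v), int(u)])  # assumes an aligned depth map in meters
    p_cam = deproject_point(u, v, depth_m, K)
    # Apply the (assumed known) camera-to-base homogeneous transform.
    return cam_to_base[:3, :3] @ p_cam + cam_to_base[:3, 3]


if __name__ == "__main__":
    K = CameraIntrinsics(fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    depth = np.full((480, 640), 0.55)   # placeholder depth map (meters)
    T = np.eye(4)                       # placeholder camera->base transform
    target = point_to_pick_target((410, 255), depth, K, T)
    print("pick target in base frame:", target)
```

A real pipeline would add depth filtering, workspace checks, and a grasp or motion planner; the sketch only illustrates why a single image-space point can serve as a compact, embodiment-agnostic interface between the VLM and the controller.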