Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation
February 18, 2026
Authors: Runpei Dong, Ziyan Li, Xialin He, Saurabh Gupta
cs.AI
Abstract
Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene via visual inputs (e.g., RGB-D images). Existing approaches are based on real-world imitation learning and exhibit limited generalization due to the difficulty of collecting large-scale training datasets. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the strong generalization and open-vocabulary understanding of large vision models with precise control performance from simulated training. We achieve this by designing an accurate residual-aware EE tracking policy. This EE tracking policy combines classical robotics with machine learning. It uses a) inverse kinematics to convert residual end-effector targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations help us cut the end-effector tracking error by 3.2x. We use this accurate end-effector tracker to build a modular system for loco-manipulation, where we use open-vocabulary large vision models for strong visual generalization. Our system is able to operate in diverse real-world environments, from offices to coffee shops, where the robot is able to reliably manipulate various everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43 cm to 92 cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of our proposed design. We believe the advances in this paper can open up new ways of training humanoid robots to interact with daily objects.
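The four ingredients of the tracking policy named in the abstract (inverse kinematics on residual targets, a learned forward model, goal adjustment, and replanning) can be sketched roughly as a closed-loop procedure. The sketch below is an illustrative assumption only: the class and function names (`ForwardModel`, `ik_solve`, `track`), the toy `tanh` forward model, and the 0.1 adjustment gain are all hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

class ForwardModel:
    """Stand-in for a learned neural forward-kinematics model:
    maps a joint configuration q to a predicted EE position.
    A trained network would replace this toy tanh mapping."""
    def predict(self, q):
        return np.tanh(q[:3])

def ik_solve(target, q_init, fk, iters=100, lr=0.5, eps=1e-4):
    """Gradient-descent IK through the (learned) forward model,
    using a finite-difference Jacobian. Returns joints that move
    the predicted EE position toward `target`."""
    q = q_init.copy()
    for _ in range(iters):
        err = fk.predict(q) - target
        if np.linalg.norm(err) < eps:
            break
        # Finite-difference Jacobian of the forward model.
        J = np.zeros((3, q.size))
        h = 1e-5
        for i in range(q.size):
            dq = q.copy()
            dq[i] += h
            J[:, i] = (fk.predict(dq) - fk.predict(q)) / h
        q -= lr * (J.T @ err)
    return q

def track(goal, q, fk, steps=5):
    """Replanning loop: each step observes the residual EE error,
    adjusts the commanded goal to compensate, and re-solves IK."""
    for _ in range(steps):
        ee = fk.predict(q)
        # Goal adjustment: offset the command by a fraction of the residual.
        adjusted = goal + 0.1 * (goal - ee)
        q = ik_solve(adjusted, q, fk)
    return q, fk.predict(q)
```

In this toy setup, repeatedly re-solving IK against the residual-adjusted goal drives the predicted EE position onto the target, which is the intuition behind combining a forward model with goal adjustment and replanning.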