

TAIHRI: Task-Aware 3D Human Keypoints Localization for Close-Range Human-Robot Interaction

April 10, 2026
Authors: Ao Li, Yonggen Ling, Yiyang Lin, Yuji Wang, Yong Deng, Yansong Tang
cs.AI

Abstract

Accurate 3D human keypoint localization is a critical technology for enabling robots to interact with users naturally and safely. Conventional 3D human keypoint estimation methods primarily focus on whole-body reconstruction quality relative to the root joint. However, in practical human-robot interaction (HRI) scenarios, robots are more concerned with the precise metric-scale spatial localization of task-relevant body parts in the egocentric camera's 3D coordinate frame. We propose TAIHRI, the first Vision-Language Model (VLM) tailored for close-range HRI perception, capable of understanding users' motion commands and directing the robot's attention to the most task-relevant keypoints. By quantizing 3D keypoints into a finite interaction space, TAIHRI precisely localizes the 3D spatial coordinates of critical body parts through 2D keypoint reasoning via next-token prediction, and seamlessly adapts to downstream tasks such as natural language control and global-space human mesh recovery. Experiments on egocentric interaction benchmarks demonstrate that TAIHRI achieves superior estimation accuracy for task-critical body parts. We believe TAIHRI opens new research avenues in embodied human-robot interaction. Code is available at: https://github.com/Tencent/TAIHRI.
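The abstract's core idea of quantizing metric 3D keypoints into a finite interaction space, so that a VLM can emit them as discrete tokens via next-token prediction, can be illustrated with a minimal sketch. The workspace bounds, bin count, and function names below are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

# Hypothetical quantization: map metric 3D coordinates in a close-range
# workspace (assumed 0-2 m per axis) onto a finite set of bin indices,
# so each keypoint coordinate becomes a discrete token the model can predict.
WORKSPACE_MIN, WORKSPACE_MAX = 0.0, 2.0  # assumed interaction range (metres)
NUM_BINS = 256                           # assumed per-axis vocabulary size

def quantize_keypoint(xyz):
    """Map a metric 3D point to three discrete bin indices (tokens)."""
    xyz = np.clip(np.asarray(xyz, dtype=float), WORKSPACE_MIN, WORKSPACE_MAX)
    scale = (NUM_BINS - 1) / (WORKSPACE_MAX - WORKSPACE_MIN)
    return np.round((xyz - WORKSPACE_MIN) * scale).astype(int)

def dequantize_keypoint(tokens):
    """Recover an approximate metric point from its bin indices."""
    scale = (WORKSPACE_MAX - WORKSPACE_MIN) / (NUM_BINS - 1)
    return np.asarray(tokens, dtype=float) * scale + WORKSPACE_MIN

point = [0.53, 1.20, 0.87]        # e.g. a wrist keypoint in the camera frame
tokens = quantize_keypoint(point)  # three integers in [0, 255]
approx = dequantize_keypoint(tokens)
```

The round-trip error is bounded by half a bin width (about 4 mm per axis here), which is the usual trade-off when discretizing continuous coordinates for autoregressive prediction.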