Embodied Referring Expression Comprehension in Human-Robot Interaction
December 6, 2025
Authors: Md Mofijul Islam, Alexi Gladstone, Sujan Sarker, Ganesh Nanduru, Md Fahim, Keyan Du, Aman Chadha, Tariq Iqbal
cs.AI
Abstract
As robots enter human workspaces, there is a crucial need for them to comprehend embodied human instructions, enabling intuitive and fluent human-robot interaction (HRI). However, accurate comprehension is challenging due to a lack of large-scale datasets that capture natural embodied interactions in diverse HRI settings. Existing datasets suffer from perspective bias, single-view collection, inadequate coverage of nonverbal gestures, and a predominant focus on indoor environments. To address these issues, we present the Refer360 dataset, a large-scale dataset of embodied verbal and nonverbal interactions collected across diverse viewpoints in both indoor and outdoor settings. Additionally, we introduce MuRes, a multimodal guided residual module designed to improve embodied referring expression comprehension. MuRes acts as an information bottleneck, extracting salient modality-specific signals and reinforcing them into pre-trained representations to form complementary features for downstream tasks. We conduct extensive experiments on four HRI datasets, including the Refer360 dataset, and demonstrate that current multimodal models fail to capture embodied interactions comprehensively; however, augmenting them with MuRes consistently improves performance. These findings establish Refer360 as a valuable benchmark and highlight the potential of guided residual learning to advance embodied referring expression comprehension in robots operating within human environments.
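To make the guided residual idea more concrete, the following is a minimal PyTorch sketch of one plausible reading of such a module based only on the abstract's description: a modality-specific feature is compressed through a low-dimensional bottleneck and added back to a pre-trained fused representation as a gated residual. The class name, bottleneck size, gating mechanism, and tensor shapes are illustrative assumptions, not the authors' actual MuRes implementation.

```python
import torch
import torch.nn as nn

class GuidedResidualBottleneck(nn.Module):
    """Hypothetical sketch of a guided residual module (names and design assumed).

    A modality-specific feature is squeezed through a low-dimensional
    bottleneck to extract a salient signal, then reinforced into the
    pre-trained fused representation as a gated residual, yielding
    complementary features for a downstream task head.
    """

    def __init__(self, feat_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.squeeze = nn.Linear(feat_dim, bottleneck_dim)   # information bottleneck
        self.expand = nn.Linear(bottleneck_dim, feat_dim)    # project back to feature space
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())

    def forward(self, fused: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # Extract a compressed, salient signal from the modality-specific feature.
        salient = self.expand(torch.relu(self.squeeze(modality)))
        # Reinforce the pre-trained representation with a gated residual.
        return fused + self.gate(fused) * salient


# Usage sketch: augment a frozen multimodal encoder's output with a visual cue.
fused = torch.randn(8, 512)    # pre-trained fused representation (batch, dim)
visual = torch.randn(8, 512)   # modality-specific feature (e.g., gesture/visual stream)
module = GuidedResidualBottleneck(feat_dim=512)
augmented = module(fused, visual)  # complementary features for the downstream task
```

The bottleneck forces the module to pass along only a compressed summary of the modality-specific stream, so the residual supplements rather than overwrites the pre-trained representation; how MuRes realizes this in detail is described in the paper itself.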