

TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment

February 14, 2026
Authors: Youngsun Wi, Jessica Yin, Elvis Xiang, Akash Sharma, Jitendra Malik, Mustafa Mukadam, Nima Fazeli, Tess Hellebrekers
cs.AI

Abstract

Human demonstrations collected with wearable devices (e.g., tactile gloves) provide fast and dexterous supervision for policy learning, guided by rich, natural tactile feedback. However, a key challenge is transferring human-collected tactile signals to robots despite differences in sensing modality and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and tolerate little to no embodiment gap between the human demonstrator and the robot, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with a different embodiment. TactAlign transforms human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by pseudo-pairs derived from hand-object interactions. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks with less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).
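The abstract does not spell out implementation details, but the core mechanism it names (a rectified flow that transports human tactile embeddings toward robot tactile embeddings, trained on pseudo-pairs) can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration only: the embedding dimension, the `VelocityField` network, and the random stand-in pseudo-pairs are assumptions, not the authors' released architecture or encoders.

```python
# Minimal sketch of rectified-flow latent transport between tactile embeddings.
# Assumed/illustrative: embedding dim, network shape, and dummy pseudo-pairs.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Predicts the flow velocity v(x_t, t) in the shared tactile latent space."""
    def __init__(self, dim: int = 64, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def rectified_flow_loss(v_theta, z_human, z_robot):
    """One training objective on pseudo-paired embeddings (z_human -> z_robot)."""
    t = torch.rand(z_human.shape[0], 1)          # sampled interpolation times
    x_t = (1 - t) * z_human + t * z_robot        # straight-line interpolant
    target_v = z_robot - z_human                 # constant target velocity
    return ((v_theta(x_t, t) - target_v) ** 2).mean()

@torch.no_grad()
def transport(v_theta, z_human, steps: int = 20):
    """Euler integration of the learned ODE: map a human tactile embedding
    into the robot side of the shared latent space."""
    x = z_human.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + dt * v_theta(x, t)
    return x

# Usage with dummy pseudo-pairs (stand-ins for embeddings derived from
# hand-object interactions on the human and robot sides):
v_theta = VelocityField(dim=64)
opt = torch.optim.Adam(v_theta.parameters(), lr=1e-3)
z_h, z_r = torch.randn(32, 64), torch.randn(32, 64)
opt.zero_grad()
rectified_flow_loss(v_theta, z_h, z_r).backward()
opt.step()
z_aligned = transport(v_theta, z_h)              # robot-aligned latent fed to the policy
```

The straight-line interpolant and constant target velocity are what make this a rectified flow rather than a generic diffusion model: at inference the ODE can be integrated in a few Euler steps, which is what keeps the latent transport low-cost.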