
Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation

July 27, 2023
Authors: William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola
cs.AI

Abstract

Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects.
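The language-guided designation described above can be sketched as a similarity query: each 3D point in the distilled feature field carries a feature vector aligned with CLIP's embedding space, and a free-text query is embedded and compared against those features to pick out the target object. The sketch below is illustrative only, with synthetic vectors standing in for a trained feature field and a real CLIP text encoder; `localize` is a hypothetical helper, not part of the paper's released code.

```python
import numpy as np

def localize(point_features: np.ndarray, text_embedding: np.ndarray) -> int:
    """Return the index of the 3D point whose distilled feature best
    matches the text query, by cosine similarity (illustrative sketch)."""
    # Normalize both sides so the dot product equals cosine similarity.
    feats = point_features / np.linalg.norm(point_features, axis=1, keepdims=True)
    query = text_embedding / np.linalg.norm(text_embedding)
    sims = feats @ query
    return int(np.argmax(sims))

# Synthetic stand-ins: 4 points with 8-dim "distilled" features
# (a real field would use CLIP-dimensional features, e.g. 512-dim).
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8))
query = features[2] + 0.01 * rng.normal(size=8)  # query close to point 2
print(localize(features, query))  # best match is point 2
```

In the actual system the query vector would come from CLIP's text encoder and the per-point features from the distilled field, so the same argmax-over-cosine-similarity pattern generalizes to unseen expressions without retraining.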