Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation
July 27, 2023
Authors: William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola
cs.AI
Abstract
Self-supervised and language-supervised image models contain rich knowledge
of the world that is important for generalization. Many robotic tasks, however,
require a detailed understanding of 3D geometry, which is often lacking in 2D
image features. This work bridges this 2D-to-3D gap for robotic manipulation by
leveraging distilled feature fields to combine accurate 3D geometry with rich
semantics from 2D foundation models. We present a few-shot learning method for
6-DOF grasping and placing that harnesses these strong spatial and semantic
priors to achieve in-the-wild generalization to unseen objects. Using features
distilled from a vision-language model, CLIP, we present a way to designate
novel objects for manipulation via free-text natural language, and demonstrate
its ability to generalize to unseen expressions and novel categories of
objects.
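
To make the distillation idea concrete, the sketch below shows one plausible way to set it up: a neural field predicts a per-point feature alongside density, features are alpha-composited along camera rays with a simplified volume renderer, and the rendered features are regressed against 2D feature maps from a vision-language model such as CLIP. A text query can then be matched against rendered features by cosine similarity. The network sizes, the renderer, and the random placeholder targets are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of a distilled feature field:
# density + semantic feature per 3D point, supervised by 2D foundation-model features.
import torch
import torch.nn as nn

class FeatureField(nn.Module):
    """Maps a 3D point to a density and a distilled semantic feature."""
    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.feature_head = nn.Linear(hidden, feat_dim)

    def forward(self, xyz: torch.Tensor):
        h = self.trunk(xyz)                       # (..., hidden)
        sigma = torch.relu(self.density_head(h))  # non-negative density
        feat = self.feature_head(h)               # distilled feature
        return sigma, feat

def render_features(field, origins, dirs, near=0.1, far=2.0, n_samples=64):
    """Alpha-composite per-point features along each ray (simplified volume rendering)."""
    t = torch.linspace(near, far, n_samples, device=origins.device)      # (S,)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]      # (R, S, 3)
    sigma, feat = field(pts)                                             # (R, S, 1), (R, S, F)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)                  # (R, S)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                                              # (R, S)
    return (weights[..., None] * feat).sum(dim=1)                        # (R, F)

# Distillation loss: rendered features should match the 2D foundation-model
# features at the corresponding pixels (random stand-ins for CLIP features here).
field = FeatureField()
origins = torch.zeros(1024, 3)
dirs = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
target_feats = torch.randn(1024, 512)  # placeholder for CLIP patch features
pred = render_features(field, origins, dirs)
loss = nn.functional.mse_loss(pred, target_feats)
loss.backward()

# Language query: cosine similarity between rendered features and a CLIP text
# embedding (placeholder tensor here) scores how well each ray matches the prompt.
text_embedding = torch.randn(512)  # placeholder for CLIP text encoder output
sim = nn.functional.cosine_similarity(pred.detach(), text_embedding[None, :], dim=-1)
```

In this framing, grasp or place demonstrations can be encoded as local feature patterns in the field, and the text-similarity scores provide the language-guided designation of novel objects that the abstract describes; the details of that few-shot matching are left out of this sketch.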