Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections

November 17, 2023
作者: Lihan Zha, Yuchen Cui, Li-Heng Lin, Minae Kwon, Montserrat Gonzalez Arenas, Andy Zeng, Fei Xia, Dorsa Sadigh
cs.AI

Abstract

Today's robot policies exhibit subpar performance when faced with the challenge of generalizing to novel environments. Human corrective feedback is a crucial form of guidance to enable such generalization. However, adapting to and learning from online human corrections is a non-trivial endeavor: not only do robots need to remember human feedback over time so they can retrieve the right information in new settings and reduce the intervention rate, but they also need to respond to feedback that ranges from arbitrary corrections of high-level human preferences to low-level adjustments of skill parameters. In this work, we present Distillation and Retrieval of Online Corrections (DROC), a large language model (LLM)-based system that can respond to arbitrary forms of language feedback, distill generalizable knowledge from corrections, and retrieve relevant past experiences based on textual and visual similarity to improve performance in novel settings. DROC can respond to a sequence of online language corrections that address failures in both high-level task plans and low-level skill primitives. We demonstrate that DROC effectively distills the relevant information from a sequence of online corrections into a knowledge base and retrieves that knowledge in settings with new task or object instances. DROC outperforms other techniques that directly generate robot code via LLMs, using only half the total number of corrections in the first round and requiring little to no correction after two iterations. We show further results, videos, prompts, and code at https://sites.google.com/stanford.edu/droc .
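The sketch below illustrates the distill-and-retrieve idea described in the abstract; it is a minimal, hypothetical example, not the authors' implementation. Distilled knowledge is stored alongside an embedding of the correction context and later retrieved by similarity to a new task description. All names (`embed`, `KnowledgeBase`, `distill`, `retrieve`) are illustrative, and the hash-seeded random embedding is a stand-in for the textual and visual features DROC actually uses.

```python
# Hypothetical sketch of a DROC-style distill-and-retrieve loop.
# The embedding function is a placeholder; in practice an LLM text
# encoder and a vision encoder would supply the features.
from dataclasses import dataclass, field
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a hash-seeded unit vector, NOT a real encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

@dataclass
class KnowledgeBase:
    # Each entry pairs a piece of distilled knowledge with the
    # embedding of the correction context it came from.
    entries: list = field(default_factory=list)

    def distill(self, correction: str, knowledge: str) -> None:
        # Store generalizable knowledge extracted from an online correction.
        self.entries.append((knowledge, embed(correction)))

    def retrieve(self, query: str, k: int = 3) -> list:
        # Return the k stored entries most similar to the new task context.
        q = embed(query)
        scored = sorted(self.entries, key=lambda e: -float(e[1] @ q))
        return [knowledge for knowledge, _ in scored[:k]]

kb = KnowledgeBase()
kb.distill("grasp the mug by its handle", "prefer handle grasps for mugs")
print(kb.retrieve("pick up the cup"))  # retrieved knowledge guides the new task
```

With a real encoder in place of `embed`, semantically related tasks ("pick up the cup") would score close to past corrections ("grasp the mug by its handle"), so the distilled preference carries over to new object instances without further intervention.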