

Embodied Task Planning with Large Language Models

July 4, 2023
Authors: Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan
cs.AI

Abstract

Equipping embodied agents with commonsense is important for robots to successfully complete complex human instructions in general environments. Recent large language models (LLMs) can embed rich semantic knowledge for agents in plan generation for complex tasks, but they lack information about the real world and often yield infeasible action sequences. In this paper, we propose a TAsk Planning Agent (TaPA) for embodied tasks that performs grounded planning under physical scene constraints, where the agent generates executable plans according to the objects existing in the scene by aligning LLMs with visual perception models. Specifically, we first construct a multimodal dataset containing triplets of indoor scenes, instructions, and action plans, where we provide designed prompts and the list of objects present in the scene to GPT-3.5 to generate a large number of instructions and corresponding planned actions. The generated data is leveraged for grounded plan tuning of pre-trained LLMs. During inference, we discover the objects in the scene by extending open-vocabulary object detectors to multi-view RGB images collected at different reachable locations. Experimental results show that plans generated by our TaPA framework achieve a higher success rate than those from LLaVA and GPT-3.5 by a sizable margin, which indicates the practicality of embodied task planning in general and complex environments.
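The abstract describes an inference-time flow of perception followed by grounded plan generation. Below is a minimal sketch of that flow, assuming a generic open-vocabulary detector wrapper and a tuned LLM are available as callables; every function name, prompt, and stub here is an illustrative assumption rather than the paper's actual API or code.

```python
# Sketch of the pipeline suggested by the abstract: detect objects in multi-view
# RGB images collected at reachable locations, merge them into a scene-level
# object list, and condition the planner on that list plus the instruction.
# All names and prompts are hypothetical stand-ins, not TaPA's real interface.

from typing import Callable, Iterable, List


def discover_scene_objects(
    views: Iterable[object],
    detect_objects: Callable[[object], List[str]],
) -> List[str]:
    """Aggregate open-vocabulary detections across multi-view images."""
    found = set()
    for view in views:
        found.update(detect_objects(view))  # e.g., an open-vocabulary detector wrapper
    return sorted(found)


def build_plan_prompt(instruction: str, scene_objects: List[str]) -> str:
    """Ground the planner by listing only objects that actually exist in the scene."""
    return (
        "Objects available in the scene: " + ", ".join(scene_objects) + "\n"
        + "Instruction: " + instruction + "\n"
        + "Generate a numbered list of executable actions using only the listed objects."
    )


def generate_grounded_plan(
    instruction: str,
    views: Iterable[object],
    detect_objects: Callable[[object], List[str]],
    llm_generate: Callable[[str], str],
) -> str:
    """Perception -> grounded prompt -> action plan from the tuned LLM."""
    scene_objects = discover_scene_objects(views, detect_objects)
    return llm_generate(build_plan_prompt(instruction, scene_objects))


if __name__ == "__main__":
    # Toy stubs so the sketch runs end to end without any real models.
    fake_views = ["view_0", "view_1"]
    fake_detector = lambda v: ["mug", "coffee machine"] if v == "view_0" else ["sink", "mug"]
    fake_llm = lambda prompt: "1. Walk to the coffee machine\n2. Place the mug\n3. Start brewing"
    print(generate_grounded_plan("Make me a cup of coffee.", fake_views, fake_detector, fake_llm))
```

The key design point the abstract emphasizes is that the planner is conditioned only on objects actually observed in the scene, which is what keeps the generated action sequences feasible; the same object-list prompting idea is also how the paper describes generating its instruction/plan training triplets with GPT-3.5.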