

Embodied Task Planning with Large Language Models

July 4, 2023
Authors: Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan
cs.AI

Abstract

Equipping embodied agents with commonsense is important for robots to successfully complete complex human instructions in general environments. Recent large language models (LLMs) can embed rich semantic knowledge for agents in plan generation for complex tasks, but they lack information about the real world and often yield infeasible action sequences. In this paper, we propose a Task Planning Agent (TaPA) for embodied tasks that performs grounded planning under physical scene constraints, where the agent generates executable plans according to the objects existing in the scene by aligning LLMs with visual perception models. Specifically, we first construct a multimodal dataset containing triplets of indoor scenes, instructions, and action plans, where we provide designed prompts and the list of objects existing in the scene to GPT-3.5 to generate a large number of instructions and corresponding planned actions. The generated data are leveraged for grounded plan tuning of pre-trained LLMs. During inference, we discover the objects in the scene by extending open-vocabulary object detectors to multi-view RGB images collected at different reachable locations. Experimental results show that the plans generated by our TaPA framework achieve a higher success rate than LLaVA and GPT-3.5 by a sizable margin, which indicates the practicality of embodied task planning in general and complex environments.
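The abstract describes two stages: generating instruction/plan training data from scene object lists with GPT-3.5, and, at inference time, grounding plan generation in objects detected from multi-view images. The sketch below illustrates that flow under stated assumptions; every function name, prompt template, and stub return value is an illustrative placeholder, not the authors' released implementation or actual prompt design.

```python
# A minimal sketch of the two stages described in the abstract.
# All names, prompts, and stub outputs are assumptions for illustration only.

from typing import Iterable, List


def build_data_generation_prompt(scene_objects: List[str]) -> str:
    # Stage 1 (dataset construction): a designed prompt plus the list of objects
    # in an indoor scene, sent to GPT-3.5 to produce instruction/plan pairs.
    return (
        "You are in a room containing: " + ", ".join(scene_objects) + ".\n"
        "Propose a household instruction and a step-by-step action plan that "
        "uses only these objects."
    )


def detect_objects(image_path: str) -> List[str]:
    # Placeholder for an open-vocabulary detector run on one RGB view;
    # a real system would return the category names detected in that image.
    return ["mug", "coffee machine"] if image_path.endswith("0.png") else ["sink"]


def aggregate_scene_objects(image_paths: Iterable[str]) -> List[str]:
    # Stage 2 (inference): union of per-view detections over multi-view RGB
    # images collected at different reachable locations.
    objects = set()
    for path in image_paths:
        objects.update(detect_objects(path))
    return sorted(objects)


def generate_plan(instruction: str, scene_objects: List[str]) -> str:
    # Placeholder for the grounded-plan-tuned LLM; the prompt format is assumed.
    prompt = (
        "Objects in the scene: " + ", ".join(scene_objects) + "\n"
        f"Instruction: {instruction}\n"
        "Generate a numbered action plan using only the listed objects."
    )
    # A real system would query the fine-tuned model with `prompt` here.
    return "1. Walk to the coffee machine\n2. Pick up the mug\n3. ..."


if __name__ == "__main__":
    views = ["view_0.png", "view_1.png", "view_2.png"]  # multi-view RGB captures
    objects = aggregate_scene_objects(views)
    print(generate_plan("Make a cup of coffee", objects))
```

Constraining the planner's prompt to the detected object list is what the abstract refers to as grounding: the plan can only reference objects that the perception stage has confirmed to exist in the scene.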