
3D-VLA: A 3D Vision-Language-Action Generative World Model

March 14, 2024
Authors: Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, Chuang Gan
cs.AI

Abstract

Recent vision-language-action (VLA) models rely on 2D inputs, lacking integration with the broader realm of the 3D physical world. Furthermore, they perform action prediction by learning a direct mapping from perception to action, neglecting the vast dynamics of the world and the relations between actions and dynamics. In contrast, human beings are endowed with world models that depict imagination about future scenarios to plan actions accordingly. To this end, we propose 3D-VLA by introducing a new family of embodied foundation models that seamlessly link 3D perception, reasoning, and action through a generative world model. Specifically, 3D-VLA is built on top of a 3D-based large language model (LLM), and a set of interaction tokens is introduced to engage with the embodied environment. Furthermore, to inject generation abilities into the model, we train a series of embodied diffusion models and align them into the LLM for predicting the goal images and point clouds. To train our 3D-VLA, we curate a large-scale 3D embodied instruction dataset by extracting vast 3D-related information from existing robotics datasets. Our experiments on held-in datasets demonstrate that 3D-VLA significantly improves the reasoning, multimodal generation, and planning capabilities in embodied environments, showcasing its potential in real-world applications.
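The abstract describes an architecture in which a 3D-based LLM emits ordinary language tokens plus special interaction tokens, and the hidden state at a generation token conditions external diffusion models that produce goal images and point clouds. The following is a minimal illustrative sketch of that control flow, not the authors' implementation; all module names, token conventions, and shapes are hypothetical placeholders.

```python
# Hypothetical sketch of the 3D-VLA control flow described in the abstract.
# Module names, special-token scheme, and dimensions are illustrative only.
import torch
import torch.nn as nn

class Toy3DVLA(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_special=8):
        super().__init__()
        # Stand-in for the 3D-based LLM backbone: a small Transformer encoder.
        self.embed = nn.Embedding(vocab_size + n_special, d_model)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Two heads: next-token logits (language + interaction tokens) and a
        # conditioning vector that would be handed to external diffusion decoders.
        self.lm_head = nn.Linear(d_model, vocab_size + n_special)
        self.cond_head = nn.Linear(d_model, d_model)

    def forward(self, token_ids):
        h = self.backbone(self.embed(token_ids))
        # Return token logits for the whole sequence and a goal-conditioning
        # embedding taken from the last position (e.g. a <goal-image> marker).
        return self.lm_head(h), self.cond_head(h[:, -1])

# Usage: interleave tokenized 3D-scene observations, the language instruction,
# and special interaction tokens; the conditioning vector would drive a
# goal-image or goal-point-cloud diffusion model in the full system.
model = Toy3DVLA()
tokens = torch.randint(0, 32000, (1, 16))   # placeholder scene + instruction tokens
logits, goal_cond = model(tokens)
print(logits.shape, goal_cond.shape)
```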
