
LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent

September 21, 2023
Authors: Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F. Fouhey, Joyce Chai
cs.AI

Abstract
3D visual grounding is a critical skill for household robots, enabling them to navigate, manipulate objects, and answer questions based on their environment. While existing approaches often rely on extensive labeled data or exhibit limitations in handling complex language queries, we propose LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model (LLM)-based 3D visual grounding pipeline. LLM-Grounder utilizes an LLM to decompose complex natural language queries into semantic constituents and employs a visual grounding tool, such as OpenScene or LERF, to identify objects in a 3D scene. The LLM then evaluates the spatial and commonsense relations among the proposed objects to make a final grounding decision. Our method does not require any labeled training data and can generalize to novel 3D scenes and arbitrary text queries. We evaluate LLM-Grounder on the ScanRefer benchmark and demonstrate state-of-the-art zero-shot grounding accuracy. Our findings indicate that LLMs significantly improve the grounding capability, especially for complex language queries, making LLM-Grounder an effective approach for 3D vision-language tasks in robotics. Videos and interactive demos can be found on the project website https://chat-with-nerf.github.io/ .
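
To make the described pipeline concrete, here is a minimal, hypothetical Python sketch of the agent loop: a complex query is decomposed into semantic constituents, each constituent is grounded to candidate 3D boxes by a tool such as OpenScene or LERF, and a final box is selected by reasoning over spatial relations. The function names, the trivial string split, and the nearest-landmark heuristic are illustrative stand-ins for the LLM prompts and grounding tools, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Candidate:
    """A candidate object proposed by the visual grounding tool."""
    label: str
    center: Tuple[float, float, float]  # (x, y, z) center of the 3D bounding box

def decompose_query(query: str) -> List[str]:
    """Stand-in for the LLM step that splits a complex query into
    semantic constituents (target object and landmarks).
    A real pipeline would prompt an LLM here."""
    # Hypothetical trivial split on " near " for illustration only.
    return [part.strip() for part in query.split(" near ")]

def ground_constituent(constituent: str) -> List[Candidate]:
    """Stand-in for an open-vocabulary grounding tool such as OpenScene
    or LERF, which would return candidate boxes for a noun phrase."""
    # Hypothetical fixed candidates; a real tool queries the 3D scene.
    return [
        Candidate(constituent, (0.0, 0.0, 0.0)),
        Candidate(constituent, (2.0, 0.5, 0.0)),
    ]

def evaluate_relations(targets: List[Candidate],
                       landmarks: List[Candidate]) -> Candidate:
    """Stand-in for the LLM's spatial/commonsense reasoning: here,
    simply pick the target candidate closest to any landmark."""
    def dist(a: Candidate, b: Candidate) -> float:
        return sum((p - q) ** 2 for p, q in zip(a.center, b.center)) ** 0.5
    return min(targets, key=lambda t: min(dist(t, lm) for lm in landmarks))

if __name__ == "__main__":
    query = "the chair near the window"
    target_phrase, landmark_phrase = decompose_query(query)
    best = evaluate_relations(ground_constituent(target_phrase),
                              ground_constituent(landmark_phrase))
    print(f"Grounded '{query}' to a box centered at {best.center}")
```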