ViNT: A Foundation Model for Visual Navigation
June 26, 2023
Authors: Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
cs.AI
Abstract
General-purpose pre-trained models ("foundation models") have enabled
practitioners to produce generalizable solutions for individual machine
learning problems with datasets that are significantly smaller than those
required for learning from scratch. Such models are typically trained on large
and diverse datasets with weak supervision, consuming much more training data
than is available for any individual downstream application. In this paper, we
describe the Visual Navigation Transformer (ViNT), a foundation model that aims
to bring the success of general-purpose pre-trained models to vision-based
robotic navigation. ViNT is trained with a general goal-reaching objective that
can be used with any navigation dataset, and employs a flexible
Transformer-based architecture to learn navigational affordances and enable
efficient adaptation to a variety of downstream navigational tasks. ViNT is
trained on a number of existing navigation datasets, comprising hundreds of
hours of robotic navigation from a variety of different robotic platforms, and
exhibits positive transfer, outperforming specialist models trained on singular
datasets. ViNT can be augmented with diffusion-based subgoal proposals to
explore novel environments, and can solve kilometer-scale navigation problems
when equipped with long-range heuristics. ViNT can also be adapted to novel
task specifications with a technique inspired by prompt-tuning, where the goal
encoder is replaced by an encoding of another task modality (e.g., GPS
waypoints or routing commands) embedded into the same space of goal tokens.
This flexibility and ability to accommodate a variety of downstream problem
domains establishes ViNT as an effective foundation model for mobile robotics.
For videos, code, and model checkpoints, see our project page at
https://visualnav-transformer.github.io.
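The adaptation scheme described above — replacing the image-goal encoder with an encoder for another task modality (e.g., GPS waypoints) that emits tokens in the same space, while the pre-trained policy stays frozen — can be illustrated with a minimal sketch. This is not the ViNT implementation: the linear maps below are hypothetical stand-ins for the real CNN goal encoder and Transformer policy, and `TOKEN_DIM` is an arbitrary illustrative width.

```python
import numpy as np

rng = np.random.default_rng(0)
TOKEN_DIM = 32  # illustrative token width, not the paper's actual dimension

# Frozen policy stand-in: pools the token sequence and maps it to an
# action (e.g., linear and angular velocity). In ViNT this is a Transformer.
W_policy = rng.standard_normal((TOKEN_DIM, 2))

def policy(tokens: np.ndarray) -> np.ndarray:
    # mean-pool over the token sequence, then a linear action head
    return tokens.mean(axis=0) @ W_policy

# Pre-trained image-goal encoder stand-in: goal image -> one goal token.
W_img = rng.standard_normal((3 * 8 * 8, TOKEN_DIM))

def encode_image_goal(img: np.ndarray) -> np.ndarray:
    return img.reshape(-1) @ W_img

# Adaptation: a small NEW encoder maps a GPS waypoint into the SAME
# goal-token space. Only this encoder would be trained; the policy and
# its weights above are left untouched.
W_gps = rng.standard_normal((2, TOKEN_DIM))

def encode_gps_goal(waypoint: np.ndarray) -> np.ndarray:
    return waypoint @ W_gps

# Observation context tokens (stand-ins for encoded past camera frames).
obs_tokens = rng.standard_normal((5, TOKEN_DIM))

# The frozen policy consumes either goal modality identically, because
# both encoders emit tokens of the same shape in the same space.
goal_from_image = encode_image_goal(rng.standard_normal((3, 8, 8)))
goal_from_gps = encode_gps_goal(np.array([12.5, -3.0]))

action_a = policy(np.vstack([obs_tokens, goal_from_image]))
action_b = policy(np.vstack([obs_tokens, goal_from_gps]))
assert action_a.shape == action_b.shape == (2,)
```

The key design point the sketch captures is the interface: because every task modality is embedded as a token in the shared goal-token space, swapping modalities requires training only the new encoder, never the downstream policy.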