

ViNT: A Foundation Model for Visual Navigation

June 26, 2023
Authors: Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine
cs.AI

Abstract

General-purpose pre-trained models ("foundation models") have enabled practitioners to produce generalizable solutions for individual machine learning problems with datasets that are significantly smaller than those required for learning from scratch. Such models are typically trained on large and diverse datasets with weak supervision, consuming much more training data than is available for any individual downstream application. In this paper, we describe the Visual Navigation Transformer (ViNT), a foundation model that aims to bring the success of general-purpose pre-trained models to vision-based robotic navigation. ViNT is trained with a general goal-reaching objective that can be used with any navigation dataset, and employs a flexible Transformer-based architecture to learn navigational affordances and enable efficient adaptation to a variety of downstream navigational tasks. ViNT is trained on a number of existing navigation datasets, comprising hundreds of hours of robotic navigation from a variety of different robotic platforms, and exhibits positive transfer, outperforming specialist models trained on singular datasets. ViNT can be augmented with diffusion-based subgoal proposals to explore novel environments, and can solve kilometer-scale navigation problems when equipped with long-range heuristics. ViNT can also be adapted to novel task specifications with a technique inspired by prompt-tuning, where the goal encoder is replaced by an encoding of another task modality (e.g., GPS waypoints or routing commands) embedded into the same space of goal tokens. This flexibility and ability to accommodate a variety of downstream problem domains establishes ViNT as an effective foundation model for mobile robotics. For videos, code, and model checkpoints, see our project page at https://visualnav-transformer.github.io.
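The abstract's two key architectural ideas, a Transformer policy that attends over a sequence of observation tokens plus a single goal token, and prompt-tuning-style adaptation that swaps the goal encoder for an encoder of another task modality (e.g., GPS waypoints) embedded into the same token space, can be illustrated with a small sketch. The code below is a hypothetical PyTorch illustration, not the authors' released implementation; the module names, token dimension, attention heads, context length, and prediction targets (a local waypoint and a distance-to-goal estimate) are all assumptions chosen for readability.

```python
# Hypothetical sketch of a ViNT-style policy: observation tokens + one goal token
# feed a Transformer trunk; the goal encoder is swappable so another modality can
# be embedded into the same goal-token space (prompt-tuning-style adaptation).
# All sizes and module choices are illustrative assumptions.
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Encodes a single RGB observation (or image goal) into one token."""

    def __init__(self, token_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, token_dim),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)


class WaypointGoalEncoder(nn.Module):
    """Hypothetical alternative goal encoder: maps a 2-D GPS waypoint into the goal-token space."""

    def __init__(self, token_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, token_dim))

    def forward(self, waypoint: torch.Tensor) -> torch.Tensor:
        return self.net(waypoint)


class NavigationTransformer(nn.Module):
    """Context observations + one goal token -> predicted waypoint and distance-to-goal."""

    def __init__(self, token_dim: int = 256, context_len: int = 5):
        super().__init__()
        self.obs_encoder = ImageEncoder(token_dim)
        self.goal_encoder: nn.Module = ImageEncoder(token_dim)  # default: image goal
        self.pos_emb = nn.Parameter(torch.zeros(context_len + 1, token_dim))
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.waypoint_head = nn.Linear(token_dim, 2)   # local (x, y) waypoint
        self.distance_head = nn.Linear(token_dim, 1)   # estimated steps to goal

    def forward(self, obs_seq: torch.Tensor, goal: torch.Tensor):
        # obs_seq: (batch, context_len, 3, H, W); goal: whatever the goal encoder expects.
        b, t = obs_seq.shape[:2]
        obs_tokens = self.obs_encoder(obs_seq.flatten(0, 1)).view(b, t, -1)
        goal_token = self.goal_encoder(goal).unsqueeze(1)
        tokens = torch.cat([obs_tokens, goal_token], dim=1) + self.pos_emb
        features = self.transformer(tokens)[:, -1]     # read out at the goal-token position
        return self.waypoint_head(features), self.distance_head(features)


# Prompt-tuning-style adaptation: keep the pretrained trunk, swap only the goal
# encoder so a GPS waypoint lands in the same token space as image goals.
model = NavigationTransformer()
model.goal_encoder = WaypointGoalEncoder()
obs = torch.randn(1, 5, 3, 96, 96)
waypoint_goal = torch.randn(1, 2)
pred_waypoint, pred_distance = model(obs, waypoint_goal)
print(pred_waypoint.shape, pred_distance.shape)  # torch.Size([1, 2]) torch.Size([1, 1])
```

In this sketch, adaptation touches only the goal encoder (and optionally a light fine-tune of the heads), which mirrors the abstract's description of replacing the goal encoder with an encoding of another task modality while reusing the pretrained navigation trunk.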