

NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks

April 28, 2025
Authors: Chia-Yu Hung, Qi Sun, Pengfei Hong, Amir Zadeh, Chuan Li, U-Xuan Tan, Navonil Majumder, Soujanya Poria
cs.AI

Abstract

Existing Vision-Language-Action (VLA) models have shown promising performance in zero-shot scenarios, demonstrating impressive task execution and reasoning capabilities. However, a significant challenge arises from the limitations of visual encoding, which can result in failures during tasks such as object grasping. Moreover, these models typically suffer from high computational overhead due to their large sizes, often exceeding 7B parameters. While these models excel in reasoning and task planning, the substantial computational overhead they incur makes them impractical for real-time robotic environments, where speed and efficiency are paramount. To address the limitations of existing VLA models, we propose NORA, a 3B-parameter model designed to reduce computational overhead while maintaining strong task performance. NORA adopts the Qwen-2.5-VL-3B multimodal model as its backbone, leveraging its superior visual-semantic understanding to enhance visual reasoning and action grounding. Additionally, our model is trained on 970k real-world robot demonstrations and equipped with the FAST+ tokenizer for efficient action sequence generation. Experimental results demonstrate that NORA outperforms existing large-scale VLA models, achieving better task performance with significantly reduced computational overhead, making it a more practical solution for real-time robotic autonomy.
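To make the action-tokenization idea concrete, below is a minimal Python sketch of one VLA-style control step: a vision-language backbone consumes a camera frame plus a language instruction and emits discrete action tokens, which a FAST-style de-tokenizer maps back to continuous robot commands. All names here (`Observation`, `decode_action_tokens`, `vla_step`) and the uniform de-quantization scheme are illustrative assumptions, not NORA's published interface; the actual model runs the Qwen-2.5-VL-3B backbone and the FAST+ tokenizer described in the abstract.

```python
# Hypothetical sketch of a VLA control step; not NORA's real API.
from dataclasses import dataclass

import numpy as np


@dataclass
class Observation:
    image: np.ndarray   # RGB camera frame, e.g. shape (224, 224, 3)
    instruction: str    # natural-language task, e.g. "pick up the red block"


def decode_action_tokens(token_ids: list[int], action_dim: int = 7) -> np.ndarray:
    """Map discrete action tokens back to continuous robot commands.

    FAST-style tokenizers compress action chunks into short discrete token
    sequences; here the inverse transform is faked with a simple uniform
    de-quantization so the sketch runs end to end.
    """
    bins = np.asarray(token_ids[:action_dim], dtype=np.float32)
    # Scale token bins in [0, 255] to normalized commands in [-1, 1].
    return (bins / 255.0) * 2.0 - 1.0


def vla_step(obs: Observation) -> np.ndarray:
    """One control step: encode observation + instruction, decode an action."""
    # A real system would run the 3B-parameter VLM backbone here; this stub
    # just emits placeholder token ids in place of model output.
    token_ids = [128, 64, 200, 30, 255, 0, 190]
    return decode_action_tokens(token_ids)


if __name__ == "__main__":
    obs = Observation(
        image=np.zeros((224, 224, 3), dtype=np.uint8),
        instruction="pick up the red block",
    )
    print("action command:", vla_step(obs))
```

In a real deployment, a step like this runs inside the robot's control loop at every timestep, which is why the reduced inference latency of a 3B backbone matters for real-time autonomy.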
