GR-3 Technical Report
July 21, 2025
Authors: Chilam Cheang, Sijin Chen, Zhongren Cui, Yingdong Hu, Liqun Huang, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Xiao Ma, Hao Niu, Wenxuan Ou, Wanli Peng, Zeyu Ren, Haixin Shi, Jiawen Tian, Hongtao Wu, Xin Xiao, Yuyang Xiao, Jiafeng Xu, Yichu Yang
cs.AI
Abstract
We report our recent progress towards building generalist robot policies, the
development of GR-3. GR-3 is a large-scale vision-language-action (VLA) model.
It showcases exceptional capabilities in generalizing to novel objects,
environments, and instructions involving abstract concepts. Furthermore, it can
be efficiently fine-tuned with minimal human trajectory data, enabling rapid
and cost-effective adaptation to new settings. GR-3 also excels in handling
long-horizon and dexterous tasks, including those requiring bi-manual
manipulation and mobile movement, showcasing robust and reliable performance.
These capabilities are achieved through a multi-faceted training recipe that
includes co-training with web-scale vision-language data, efficient fine-tuning
from human trajectory data collected via VR devices, and effective imitation
learning with robot trajectory data. In addition, we introduce ByteMini, a
versatile bi-manual mobile robot designed with exceptional flexibility and
reliability, capable of accomplishing a wide range of tasks when integrated
with GR-3. Through extensive real-world experiments, we show GR-3 surpasses the
state-of-the-art baseline method, pi_0, on a wide variety of challenging
tasks. We hope GR-3 can serve as a step towards building generalist robots
capable of assisting humans in daily life.