
BEAR: Benchmarking and Enhancing Multimodal Language Models for Atomic Embodied Capabilities

October 9, 2025
Authors: Yu Qi, Haibo Zhao, Ziyu Guo, Siyuan Ma, Ziyan Chen, Yaokun Han, Renrui Zhang, Zitiantao Lin, Shiji Xin, Yijian Huang, Kai Cheng, Peiheng Wang, Jiazheng Liu, Jiayi Zhang, Yizhe Zhu, Wenqing Wang, Yiran Qin, Xupeng Zhu, Haojie Huang, Lawson L. S. Wong
cs.AI

Abstract

Embodied capabilities refer to a suite of fundamental abilities for an agent to perceive, comprehend, and interact with the physical world. While multimodal large language models (MLLMs) show promise as embodied agents, a thorough and systematic evaluation of their embodied capabilities is still lacking: existing benchmarks primarily focus on specific domains such as planning or spatial understanding. To bridge this gap, we introduce BEAR, a comprehensive and fine-grained benchmark that evaluates MLLMs on atomic embodied capabilities. BEAR comprises 4,469 interleaved image-video-text entries across 14 domains in 6 categories, spanning tasks from low-level pointing, trajectory understanding, and spatial reasoning to high-level planning. Extensive evaluation of 20 representative MLLMs reveals persistent limitations across all domains of embodied capabilities. To address this shortfall, we propose BEAR-Agent, a multimodal conversable agent that integrates pretrained vision models to strengthen MLLM perception, 3D understanding, and planning capabilities. It substantially enhances MLLM performance across diverse embodied capabilities on BEAR, yielding a 9.12% absolute gain (a 17.5% relative improvement) on GPT-5. Furthermore, our experiments indicate that improving MLLM embodied capabilities benefits embodied tasks in simulated environments. Project website: https://bear-official66.github.io/
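For context on the headline numbers (a back-of-the-envelope check inferred from the abstract, not a figure the authors report): if the 9.12% absolute gain on GPT-5 corresponds to a 17.5% relative improvement, the implied GPT-5 scores on BEAR before and after adding BEAR-Agent are roughly

\[
\text{baseline} \approx \frac{9.12\%}{0.175} \approx 52.1\%, \qquad
\text{with BEAR-Agent} \approx 52.1\% + 9.12\% \approx 61.2\%.
\]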