Has GPT-5 Achieved Spatial Intelligence? An Empirical Study

August 18, 2025
作者: Zhongang Cai, Yubo Wang, Qingping Sun, Ruisi Wang, Chenyang Gu, Wanqi Yin, Zhiqian Lin, Zhitao Yang, Chen Wei, Xuanke Shi, Kewang Deng, Xiaoyang Han, Zukai Chen, Jiaqi Li, Xiangyu Fan, Hanming Deng, Lewei Lu, Bo Li, Ziwei Liu, Quan Wang, Dahua Lin, Lei Yang
cs.AI

Abstract

Multi-modal models have achieved remarkable progress in recent years. Nevertheless, they continue to exhibit notable limitations in spatial understanding and reasoning, capabilities that are fundamental to achieving artificial general intelligence. With the recent release of GPT-5, allegedly the most powerful AI model to date, it is timely to examine where the leading models stand on the path toward spatial intelligence. First, we propose a comprehensive taxonomy of spatial tasks that unifies existing benchmarks, and we discuss the challenges of ensuring fair evaluation. We then evaluate state-of-the-art proprietary and open-source models on eight key benchmarks, at a cost exceeding one billion total tokens. Our empirical study reveals that (1) GPT-5 demonstrates unprecedented strength in spatial intelligence, yet (2) it still falls short of human performance across a broad spectrum of tasks. Moreover, we (3) identify the spatial intelligence problems that are most challenging for multi-modal models, and (4) find that proprietary models do not exhibit a decisive advantage on the most difficult problems. In addition, we conduct a qualitative evaluation across a diverse set of scenarios that are intuitive for humans yet fail even the most advanced multi-modal models.