We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning?
July 1, 2024
Authors: Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao, Zhimin Bao, Chen Li, Honggang Zhang
cs.AI
Abstract
Visual mathematical reasoning, as a fundamental visual reasoning ability, has
received widespread attention from the Large Multimodal Models (LMMs)
community. Existing benchmarks, such as MathVista and MathVerse, focus more on
result-oriented performance but neglect the underlying principles in
knowledge acquisition and generalization. Inspired by human-like mathematical
reasoning, we introduce WE-MATH, the first benchmark specifically designed to
explore the problem-solving principles beyond end-to-end performance. We
meticulously collect and categorize 6.5K visual math problems, spanning 67
hierarchical knowledge concepts and five layers of knowledge granularity. We
decompose composite problems into sub-problems according to the required
knowledge concepts and introduce a novel four-dimensional metric, namely
Insufficient Knowledge (IK), Inadequate Generalization (IG), Complete Mastery
(CM), and Rote Memorization (RM), to hierarchically assess inherent issues in
LMMs' reasoning process. With WE-MATH, we conduct a thorough evaluation of
existing LMMs in visual mathematical reasoning and reveal a negative
correlation between solving steps and problem-specific performance. We confirm
that the IK issue of LMMs can be effectively improved via knowledge augmentation
strategies. More notably, the primary challenge of GPT-4o has significantly
transitioned from IK to IG, establishing it as the first LMM advancing towards
the knowledge generalization stage. In contrast, other LMMs exhibit a marked
inclination towards Rote Memorization - they correctly solve composite problems
involving multiple knowledge concepts yet fail to answer sub-problems. We
anticipate that WE-MATH will open new pathways for advancements in visual
mathematical reasoning for LMMs. The WE-MATH data and evaluation code are
available at https://github.com/We-Math/We-Math.
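The four-dimensional metric can be pictured as a simple decision rule over a model's correctness on a composite problem and on its decomposed sub-problems. The sketch below is an illustrative assumption inferred from the abstract's descriptions (e.g. RM means solving the composite problem while failing sub-problems); the paper's exact definition may differ.

```python
def classify_reasoning(composite_correct: bool, sub_correct: list[bool]) -> str:
    """Illustrative four-way classification (IK/IG/CM/RM) for one composite
    problem, based on whether the model answers the composite problem and
    each of its sub-problems correctly. Assumed mapping, not the paper's
    official implementation."""
    all_subs = all(sub_correct)
    if composite_correct and all_subs:
        return "CM"  # Complete Mastery: composite and all sub-problems solved
    if composite_correct and not all_subs:
        return "RM"  # Rote Memorization: composite solved, sub-problems failed
    if all_subs:
        return "IG"  # Inadequate Generalization: knows concepts, fails to combine them
    return "IK"      # Insufficient Knowledge: fails the underlying sub-problems
```

For example, a model that answers a multi-concept geometry problem correctly but misses one of its single-concept sub-problems would be flagged as RM under this rule.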