CogFlow: Bridging Perception and Reasoning through Knowledge Internalization for Visual Mathematical Problem Solving
January 5, 2026
Authors: Shuhang Chen, Yunqiu Xu, Junjie Xie, Aojun Lu, Tao Feng, Zeying Huang, Ning Zhang, Yi Sun, Yi Yang, Hangjie Yuan
cs.AI
Abstract
Despite significant progress, multimodal large language models continue to struggle with visual mathematical problem solving. Some recent works recognize that visual perception is a bottleneck in visual mathematical reasoning, but their solutions are limited to improving the extraction and interpretation of visual inputs. Notably, they all overlook a key question: whether the extracted visual cues are faithfully integrated and properly utilized in subsequent reasoning. Motivated by this, we present CogFlow, a novel cognition-inspired three-stage framework that incorporates a knowledge internalization stage, explicitly simulating the hierarchical flow of human reasoning: perception ⇒ internalization ⇒ reasoning. In line with this hierarchical flow, we holistically enhance all three stages. We devise Synergistic Visual Rewards to boost perception capabilities in the parametric and semantic spaces, jointly improving visual information extraction from symbols and diagrams. To guarantee faithful integration of the extracted visual cues into subsequent reasoning, we introduce a Knowledge Internalization Reward model in the internalization stage, bridging perception and reasoning. Moreover, we design a Visual-Gated Policy Optimization algorithm that further enforces grounding of the reasoning in the visual knowledge, preventing the model from taking shortcuts — reasoning chains that appear coherent but are visually ungrounded. Finally, we contribute MathCog, a new training dataset containing over 120K samples with high-quality perception–reasoning aligned annotations. Comprehensive experiments and analysis on widely used visual mathematical reasoning benchmarks validate the superiority of the proposed CogFlow.