

StepFun-Formalizer: Unlocking the Autoformalization Potential of LLMs through Knowledge-Reasoning Fusion

August 6, 2025
Authors: Yutong Wu, Di Huang, Ruosi Wan, Yue Peng, Shijie Shang, Chenrui Cao, Lei Qi, Rui Zhang, Zidong Du, Jie Yan, Xing Hu
cs.AI

Abstract

Autoformalization aims to translate natural-language mathematical statements into a formal language. While LLMs have accelerated progress in this area, existing methods still suffer from low accuracy. We identify two key abilities for effective autoformalization: comprehensive mastery of formal-language domain knowledge, and the reasoning capability to understand natural-language problems and align informal statements with their formal counterparts. Without the former, a model cannot identify the correct formal objects; without the latter, it struggles to interpret real-world contexts and map them precisely into formal expressions. To address these gaps, we introduce ThinkingF, a data-synthesis and training pipeline that improves both abilities. First, we construct two datasets: one by distilling and selecting large-scale examples rich in formal knowledge, and another by generating informal-to-formal reasoning trajectories guided by expert-designed templates. We then apply supervised fine-tuning (SFT) and reinforcement learning with verifiable rewards (RLVR) on these datasets to further fuse and refine the two abilities. The resulting 7B and 32B models exhibit both comprehensive formal knowledge and strong informal-to-formal reasoning. Notably, StepFun-Formalizer-32B achieves state-of-the-art BEq@1 scores of 40.5% on FormalMATH-Lite and 26.7% on ProverBench, surpassing all prior general-purpose and specialized models.
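To make the task concrete, here is a hypothetical autoformalization example (not taken from the paper): an informal statement rendered as a Lean 4 theorem statement using Mathlib conventions. Autoformalization targets the statement itself, so the proof body is left as `sorry`.

```lean
import Mathlib

-- Informal statement: "For every natural number n, the sum of the first n
-- odd numbers equals n²."
-- A possible Lean 4 formalization; the theorem name is illustrative.
theorem sum_first_n_odds (n : ℕ) :
    ∑ i ∈ Finset.range n, (2 * i + 1) = n ^ 2 := by
  sorry
```

Producing such a statement requires exactly the two abilities the abstract identifies: knowing the right formal objects (`Finset.range`, `∑`, `ℕ`) and correctly aligning the informal phrase "the first n odd numbers" with the index expression `2 * i + 1` over `Finset.range n`.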