Diverse Inference and Verification for Advanced Reasoning
February 14, 2025
Authors: Iddo Drori, Gaston Longhitano, Mao Mao, Seunghwan Hyun, Yuke Zhang, Sungjun Park, Zachary Meeks, Xin-Yu Zhang, Ben Segev, Howard Yong, Nakul Verma, Avi Shporer, Alon Amit, Madeleine Udell
cs.AI
Abstract
Reasoning LLMs such as OpenAI o1, o3 and DeepSeek R1 have made significant
progress in mathematics and coding, yet still find advanced tasks challenging, such as
International Mathematical Olympiad (IMO) combinatorics problems, Abstraction
and Reasoning Corpus (ARC) puzzles, and Humanity's Last Exam (HLE) questions.
We use a diverse inference approach that combines multiple models and methods
at test time. We find that verifying mathematics and code problems, and
rejection sampling on other problems is simple and effective. We automatically
verify the correctness of solutions to IMO problems using Lean and of ARC puzzles by
code, and find that best-of-N effectively answers HLE questions. Our approach
increases answer accuracy on IMO combinatorics problems from 33.3% to 77.8%,
accuracy on HLE questions from 8% to 37%, and solves 80% of ARC puzzles that
948 humans could not and 26.5% of ARC puzzles that o3 with high compute does not.
Test-time simulations, reinforcement learning, and meta-learning with inference
feedback improve generalization by adapting agent graph representations and
varying prompts, code, and datasets. Our approach is reliable, robust, and
scalable, and in the spirit of reproducible research, we will make it publicly
available upon publication.
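The best-of-N with verification strategy the abstract describes (sample several candidate solutions, keep one that an automatic checker accepts, otherwise fall back to a vote over the samples) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generator and verifier below are toy stand-ins for the LLM samplers and the Lean/code checkers.

```python
from itertools import count

def best_of_n(generate, verify, n=8):
    """Sample n candidate answers; return the first one the
    verifier accepts. If every candidate is rejected, fall back
    to a majority vote over the sampled candidates."""
    candidates = [generate() for _ in range(n)]
    for cand in candidates:
        if verify(cand):
            return cand
    return max(set(candidates), key=candidates.count)

# Toy stand-ins: a deterministic "model" that cycles through small
# integers, and a "verifier" that accepts only the answer 3.
gen = count(1)
answer = best_of_n(generate=lambda: next(gen) % 4,
                   verify=lambda x: x == 3,
                   n=10)
print(answer)  # -> 3 (the third candidate passes verification)
```

The key design point is that a cheap, reliable verifier (a Lean proof checker, a unit test for ARC transformation code) lets many unreliable samples be combined into a reliable answer; only when no candidate is verifiable does the method degrade to plain voting.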