Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation

February 26, 2025
作者: Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru, Jonas Geiping, Matthias Bethge, Ameya Prabhu
cs.AI

Abstract

There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks for LMs predominantly assess their ability to generate solutions rather than challenge them. We advocate for developing benchmarks that evaluate this inverse capability - creating counterexamples for subtly incorrect solutions. To demonstrate this approach, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, where human experts successfully identified counterexamples. Our analysis finds that the best reasoning agents, even OpenAI o3-mini (high) with code execution feedback, can create counterexamples for only <9% of incorrect solutions in REFUTE, even though ratings indicate its ability to solve up to 48% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions - a capability that is crucial for both accelerating research and making models self-improve through reliable reflective reasoning.
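To make the evaluation setup concrete, the sketch below illustrates one way a proposed counterexample can be checked automatically with code execution, in the spirit described above: run the incorrect submission and a correct reference solution on the candidate input and see whether their outputs disagree. This is only a minimal illustration under stated assumptions, not REFUTE's actual harness; the file names `buggy_submission.py` and `reference_solution.py` are hypothetical, both programs are assumed to read stdin and write stdout, and problems with multiple valid outputs would need a problem-specific checker instead of a direct comparison.

```python
# Minimal sketch (assumptions noted in the lead-in) of verifying a candidate
# counterexample by code execution: an input falsifies the submission if the
# buggy submission and a reference solution disagree on it.
import subprocess


def run(solution_path: str, candidate_input: str, timeout_s: float = 2.0) -> str:
    """Run a stdin/stdout solution on the candidate input and return its output."""
    result = subprocess.run(
        ["python", solution_path],
        input=candidate_input,
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout


def is_counterexample(candidate_input: str,
                      buggy_path: str = "buggy_submission.py",       # hypothetical path
                      reference_path: str = "reference_solution.py"  # hypothetical path
                      ) -> bool:
    """Token-wise output comparison; a mismatch means the input is a counterexample."""
    buggy_out = run(buggy_path, candidate_input)
    reference_out = run(reference_path, candidate_input)
    return buggy_out.split() != reference_out.split()


if __name__ == "__main__":
    # Hypothetical usage: a language model proposes an input, code execution settles
    # whether it exposes the bug.
    proposed = "3\n1 2 3\n"
    print("falsifies submission:", is_counterexample(proposed))
```

Because the verdict comes from executing code rather than from human judgment, this kind of check is what lets a benchmark like REFUTE score counterexample creation automatically and stay dynamically updated with new problems.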
