Inverse Scaling in Test-Time Compute
July 19, 2025
Authors: Aryo Pradipta Gema, Alexander Hägele, Runjin Chen, Andy Arditi, Jacob Goldman-Wetzler, Kit Fraser-Taliente, Henry Sleight, Linda Petrini, Julian Michael, Beatrice Alex, Pasquale Minervini, Yanda Chen, Joe Benton, Ethan Perez
cs.AI
Abstract
We construct evaluation tasks where extending the reasoning length of Large
Reasoning Models (LRMs) deteriorates performance, exhibiting an inverse scaling
relationship between test-time compute and accuracy. Our evaluation tasks span
four categories: simple counting tasks with distractors, regression tasks with
spurious features, deduction tasks with constraint tracking, and advanced AI
risks. We identify five distinct failure modes when models reason for longer:
1) Claude models become increasingly distracted by irrelevant information; 2)
OpenAI o-series models resist distractors but overfit to problem framings; 3)
models shift from reasonable priors to spurious correlations; 4) all models
show difficulties in maintaining focus on complex deductive tasks; and 5)
extended reasoning may amplify concerning behaviors, with Claude Sonnet 4
showing increased expressions of self-preservation. These findings suggest that
while test-time compute scaling remains promising for improving model
capabilities, it may inadvertently reinforce problematic reasoning patterns.
Our results demonstrate the importance of evaluating models across diverse
reasoning lengths to identify and address these failure modes in LRMs.