Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute
June 18, 2025
Authors: Sheng Liu, Tianlang Chen, Pan Lu, Haotian Ye, Yizheng Chen, Lei Xing, James Zou
cs.AI
Abstract
Test-time compute has emerged as a powerful paradigm for improving the
performance of large language models (LLMs), where generating multiple outputs
or refining individual reasoning chains can significantly boost answer accuracy.
However,
existing methods like Best-of-N, majority voting, and self-reflection typically
apply reasoning in a uniform way across inputs, overlooking the fact that
different problems may require different levels of reasoning depth. In this
work, we propose Fractional Reasoning, a training-free and model-agnostic
framework that enables continuous control over reasoning intensity at inference
time, going beyond the limitations of fixed instructional prompts. Our method
operates by extracting the latent steering vector associated with deeper
reasoning and reapplying it with a tunable scaling factor, allowing the model
to tailor its reasoning process to the complexity of each input. This supports
two key modes of test-time scaling: (1) improving output quality in
breadth-based strategies (e.g., Best-of-N, majority voting), and (2) enhancing
the correctness of individual reasoning chains in depth-based strategies (e.g.,
self-reflection). Experiments on GSM8K, MATH500, and GPQA demonstrate that
Fractional Reasoning consistently improves performance across diverse reasoning
tasks and models.
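
To make the mechanism concrete, below is a minimal, hypothetical Python sketch of the core idea: derive a steering vector from the difference in hidden states between a deeper-reasoning prompt and a plain prompt, then re-inject it with a tunable scaling factor during generation. The model name, layer index, prompts, and the value of alpha are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of latent steering with a tunable scaling factor,
# using Hugging Face transformers. Model, layer, prompts, and alpha are
# illustrative assumptions; the paper evaluates larger LLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model (assumption)
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # which transformer block to steer (assumption)

def mean_hidden(prompt: str) -> torch.Tensor:
    """Mean hidden state of `prompt` at layer LAYER."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER].mean(dim=1).squeeze(0)

# Steering vector: direction from a plain prompt toward a
# deeper-reasoning ("think step by step") prompt.
question = "What is 17 * 24?"
v = mean_hidden(question + " Think step by step, carefully.") \
    - mean_hidden(question)

def make_hook(alpha: float):
    """Forward hook that adds alpha * v to the block's hidden states."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * v  # scale reasoning intensity
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

# Generate with a chosen reasoning intensity alpha (the "fraction").
alpha = 1.5  # >0 pushes toward deeper reasoning (illustrative value)
handle = model.transformer.h[LAYER].register_forward_hook(make_hook(alpha))
ids = tok(question, return_tensors="pt")
gen = model.generate(**ids, max_new_tokens=60, do_sample=False)
handle.remove()
print(tok.decode(gen[0], skip_special_tokens=True))
```

In this reading, the scaling factor plays the role of a continuous knob: a small alpha would suffice for simple inputs, while a larger alpha would push the model toward more deliberate reasoning on harder problems, and the same mechanism could be swept over several alpha values when sampling candidates for Best-of-N or majority voting.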