AdaCtrl: Towards Adaptive and Controllable Reasoning via Difficulty-Aware Budgeting
May 24, 2025
Authors: Shijue Huang, Hongru Wang, Wanjun Zhong, Zhaochen Su, Jiazhan Feng, Bowen Cao, Yi R. Fung
cs.AI
Abstract
Modern large reasoning models demonstrate impressive problem-solving
capabilities by employing sophisticated reasoning strategies. However, they
often struggle to balance efficiency and effectiveness, frequently generating
unnecessarily lengthy reasoning chains for simple problems. In this work, we
propose AdaCtrl, a novel framework to support both difficulty-aware adaptive
reasoning budget allocation and explicit user control over reasoning depth.
AdaCtrl dynamically adjusts its reasoning length based on self-assessed problem
difficulty, while also allowing users to manually control the budget to
prioritize either efficiency or effectiveness. This is achieved through a
two-stage training pipeline: an initial cold-start fine-tuning phase to instill
the ability to self-assess difficulty and adjust the reasoning budget, followed by a
difficulty-aware reinforcement learning (RL) stage that refines the model's
adaptive reasoning strategies and calibrates its difficulty assessments based
on its evolving capabilities during online training. To enable intuitive user
interaction, we design explicit length-triggered tags that function as a
natural interface for budget control. Empirical results show that AdaCtrl
adapts reasoning length based on estimated difficulty: compared with a standard
training baseline that also incorporates fine-tuning and RL, it yields
performance improvements while reducing response length by 10.06%
and 12.14% on the more challenging AIME2024 and AIME2025 datasets, which
require elaborate reasoning, and by 62.05% and 91.04% on the MATH500 and GSM8K
datasets, where more concise responses are sufficient. Furthermore, AdaCtrl
enables precise user control over the reasoning budget, allowing for tailored
responses that meet specific needs.
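
The abstract describes explicit length-triggered tags as the user-facing interface for budget control. The sketch below illustrates one way such an interface could be wired into prompting; the tag strings ("[Quick Response]", "[Deep Thinking]") and the prompt layout are assumptions for illustration, not the paper's exact format.

```python
# Hypothetical sketch of a length-triggered tag interface for budget control.
# Tag strings and prompt layout are illustrative assumptions only.

EASY_TAG = "[Quick Response]"   # assumed tag requesting a concise answer
HARD_TAG = "[Deep Thinking]"    # assumed tag requesting an elaborate reasoning chain

def build_prompt(question: str, user_tag: str | None = None) -> str:
    """Prepend an explicit budget tag when the user supplies one; otherwise
    the model is expected to emit its own tag based on self-assessed difficulty."""
    if user_tag is not None:
        return f"{user_tag}\n{question}"
    return question

# Usage: force a concise response for a simple question.
print(build_prompt("What is 17 * 24?", user_tag=EASY_TAG))
```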
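The difficulty-aware RL stage is described only at a high level. Below is a minimal sketch of one plausible reward shaping, assuming correctness plus a brevity bonus that scales with how easy the prompt appears to the current policy (measured by its pass rate over sampled rollouts); the paper's actual reward design may differ.

```python
# Minimal sketch of a difficulty-aware reward for the RL stage (an assumption,
# not the paper's exact formulation): reward correctness, and give a brevity
# bonus that grows with the prompt's empirical easiness (batch pass rate).

def difficulty_aware_reward(is_correct: bool,
                            response_len: int,
                            batch_pass_rate: float,
                            max_len: int = 8192,
                            brevity_weight: float = 0.5) -> float:
    correctness = 1.0 if is_correct else 0.0
    brevity = 1.0 - min(response_len, max_len) / max_len  # 1.0 = very short
    # Easy prompts (high pass rate) are pushed toward short answers; hard
    # prompts are rewarded almost purely for correctness.
    return correctness + brevity_weight * batch_pass_rate * brevity

# Usage: a correct, short answer to an easy prompt earns the largest reward.
print(difficulty_aware_reward(True, response_len=200, batch_pass_rate=0.9))
```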