Hierarchical Budget Policy Optimization for Adaptive Reasoning
July 21, 2025
Authors: Shangke Lyu, Linjuan Wu, Yuchen Yan, Xingyu Wu, Hao Li, Yongliang Shen, Peisheng Jiang, Weiming Lu, Jun Xiao, Yueting Zhuang
cs.AI
Abstract
Large reasoning models achieve remarkable performance through extensive
chain-of-thought generation, yet exhibit significant computational inefficiency
by applying uniform reasoning strategies regardless of problem complexity. We
present Hierarchical Budget Policy Optimization (HBPO), a reinforcement
learning framework that enables models to learn problem-specific reasoning
depths without sacrificing capability. HBPO addresses the fundamental challenge
of exploration space collapse in efficiency-oriented training, where penalties
on long output length systematically bias models away from necessary long
reasoning paths. Through hierarchical budget exploration, our approach
partitions rollout samples into multiple subgroups with distinct token budgets,
aiming to enable efficient resource allocation while preventing degradation of
capability. We introduce differentiated reward mechanisms that create
budget-aware incentives aligned with the complexity of the problem, allowing
models to discover natural correspondences between task requirements and
computational effort. Extensive experiments demonstrate that HBPO reduces
average token usage by up to 60.6% while improving accuracy by 3.14% across
four reasoning benchmarks. Unlike existing methods that impose external
constraints or rely on discrete mode selection, HBPO exhibits emergent adaptive
behavior where models automatically adjust reasoning depth based on problem
complexity. Our results suggest that reasoning efficiency and capability are
not inherently conflicting and can be simultaneously optimized through
appropriately structured hierarchical training that preserves exploration
diversity.
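
To make the hierarchical budget exploration and the differentiated, budget-aware reward concrete, below is a minimal Python sketch. It assumes an even split of rollouts across subgroups and a simple linear decay penalty beyond each subgroup's budget; the budget values, decay rate, and helper names (partition_rollouts, budget_aware_reward) are illustrative assumptions, not HBPO's exact formulation.

```python
# Illustrative sketch of hierarchical budget exploration (assumptions, not the paper's exact method).
from dataclasses import dataclass
from typing import List


@dataclass
class Rollout:
    tokens_used: int   # length of the generated chain of thought
    correct: bool      # whether the final answer is correct


def partition_rollouts(rollouts: List[Rollout], budgets: List[int]) -> List[List[Rollout]]:
    """Split rollout samples into subgroups, one per token budget (even round-robin split assumed)."""
    k = len(budgets)
    return [rollouts[i::k] for i in range(k)]


def budget_aware_reward(r: Rollout, budget: int, decay: float = 0.001) -> float:
    """Hypothetical differentiated reward: correct answers within the subgroup budget earn 1.0;
    reward decays linearly with the number of tokens beyond the budget."""
    if not r.correct:
        return 0.0
    excess = max(0, r.tokens_used - budget)
    return max(0.0, 1.0 - decay * excess)


# Example: four subgroups with distinct budgets. Tight budgets reward short reasoning on easy
# problems, while larger budgets keep necessary long reasoning paths in the exploration space.
budgets = [512, 1024, 2048, 4096]
rollouts = [Rollout(tokens_used=300 + 700 * i, correct=True) for i in range(8)]
for budget, group in zip(budgets, partition_rollouts(rollouts, budgets)):
    rewards = [budget_aware_reward(r, budget) for r in group]
    print(f"budget={budget}: rewards={rewards}")
```

In this sketch, a correct answer within its subgroup's budget receives full reward, so the incentive to shorten reasoning applies only where the budget allows it, which is one way to read the paper's claim that budget-aware incentives prevent exploration space collapse.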