
Think Right: Learning to Mitigate Under-Over Thinking via Adaptive, Attentive Compression

October 2, 2025
Authors: Joykirat Singh, Justin Chih-Yao Chen, Archiki Prasad, Elias Stengel-Eskin, Akshay Nambi, Mohit Bansal
cs.AI

Abstract

Recent thinking models solve complex reasoning tasks by scaling test-time compute, but this scaling must be allocated in line with task difficulty. On one hand, short reasoning (underthinking) leads to errors on harder problems that require extended reasoning steps; on the other hand, excessively long reasoning (overthinking) can be token-inefficient, generating unnecessary steps even after reaching a correct intermediate solution. We refer to this as under-adaptivity, where the model fails to modulate its response length appropriately given problems of varying difficulty. To address under-adaptivity and strike a balance between under- and overthinking, we propose TRAAC (Think Right with Adaptive, Attentive Compression), an online post-training RL method that leverages the model's self-attention over a long reasoning trajectory to identify important steps and prune redundant ones. TRAAC also estimates difficulty and incorporates it into training rewards, thereby learning to allocate reasoning budget commensurate with example difficulty. Our approach improves accuracy, reduces reasoning steps, and enables adaptive thinking compared to base models and other RL baselines. Across a variety of tasks (AIME, AMC, GPQA-D, BBEH), TRAAC (Qwen3-4B) achieves an average absolute accuracy gain of 8.4% with a relative reduction in reasoning length of 36.8% compared to the base model, and a 7.9% accuracy gain paired with a 29.4% length drop compared to the best RL baseline. TRAAC also shows strong generalization: although our models are trained on math datasets, they show accuracy and efficiency gains on out-of-distribution non-math datasets like GPQA-D, BBEH, and OptimalThinkingBench. Our analysis further verifies that TRAAC provides fine-grained adjustments to thinking budget based on difficulty and that a combination of task-difficulty calibration and attention-based compression yields gains across diverse tasks.
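To make the abstract's two core ideas concrete, here is a minimal, hypothetical sketch of (1) pruning low-attention reasoning steps and (2) a difficulty-calibrated length reward. The function names, the keep ratio, the token budgets, and the reward form are illustrative assumptions based only on the abstract, not the authors' exact formulation.

```python
# Hypothetical sketch of attention-based step pruning and a
# difficulty-calibrated reward, loosely following the abstract's description.
from typing import List


def prune_steps_by_attention(steps: List[str],
                             attention_mass: List[float],
                             keep_ratio: float = 0.6) -> List[str]:
    """Keep the reasoning steps that receive the most attention mass.

    `attention_mass[i]` is assumed to be the model's aggregated self-attention
    directed at the tokens of step i (e.g., averaged over heads and layers).
    """
    assert len(steps) == len(attention_mass)
    k = max(1, int(len(steps) * keep_ratio))
    # Indices of the k most-attended steps, restored to original order.
    top = sorted(sorted(range(len(steps)), key=lambda i: -attention_mass[i])[:k])
    return [steps[i] for i in top]


def difficulty_calibrated_reward(is_correct: bool,
                                 num_tokens: int,
                                 difficulty: float,
                                 budget_easy: int = 256,
                                 budget_hard: int = 2048) -> float:
    """Reward correctness, penalizing length relative to a difficulty-scaled budget.

    `difficulty` is assumed to lie in [0, 1] (e.g., estimated from rollout
    failure rates); harder problems get a larger token budget before the
    length penalty applies.
    """
    budget = budget_easy + difficulty * (budget_hard - budget_easy)
    length_penalty = max(0.0, (num_tokens - budget) / budget)
    return (1.0 if is_correct else 0.0) - 0.5 * length_penalty


if __name__ == "__main__":
    steps = ["Set up the equation", "Re-check the setup (redundant)",
             "Solve for x", "Verify the answer"]
    attn = [0.35, 0.05, 0.40, 0.20]
    print(prune_steps_by_attention(steps, attn, keep_ratio=0.5))
    print(difficulty_calibrated_reward(True, num_tokens=900, difficulty=0.3))
```

In an online RL loop, the pruned trajectory and the difficulty-aware reward would together push the policy toward shorter traces on easy problems and longer ones on hard problems, which is the adaptivity the paper targets.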