
Reasoning Cache: Continual Improvement Over Long Horizons via Short-Horizon RL

February 3, 2026
Authors: Ian Wu, Yuxiao Qu, Amrith Setlur, Aviral Kumar
cs.AI

Abstract

Large Language Models (LLMs) that can continually improve beyond their training budgets are able to solve increasingly difficult problems by adapting at test time, a property we refer to as extrapolation. However, standard reinforcement learning (RL) operates over fixed problem distributions and training budgets, which limits extrapolation amidst distribution shift at test time. To address this, we introduce RC, an iterative decoding algorithm that replaces standard autoregressive decoding during both training and inference. RC exploits an asymmetry between the response generation and summarization capabilities of LLMs to construct reasoning chains that consistently improve across iterations. Models trained to use RC can extrapolate and continually improve over reasoning horizons more than an order of magnitude longer than those seen during training. Empirically, training a 4B model with RC using a 16k-token training budget improves performance on HMMT 2025 from 40% to nearly 70% with 0.5M tokens at test time, outperforming both comparably sized models and many larger reasoning LLMs. Finally, we also show that models trained with RC can more effectively leverage existing scaffolds to further scale test-time performance, due to the improved summary-conditioned generation abilities learned through training.
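
The abstract describes RC only at a high level: short generation steps conditioned on a running summary, with a summarization step between iterations. A minimal illustrative sketch of such a decode-then-summarize loop is shown below; the `generate` interface, the prompt wording, the fixed iteration count, and the token budgets are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of an iterative decode-then-summarize loop in the spirit of RC,
# as described in the abstract. The `generate` callable, prompts, iteration count,
# and token budgets are illustrative assumptions, not the authors' implementation.

from typing import Callable


def rc_decode(
    generate: Callable[[str, int], str],  # (prompt, max_tokens) -> completion
    problem: str,
    num_iterations: int = 8,
    tokens_per_iteration: int = 16_000,
    summary_tokens: int = 1_000,
) -> str:
    summary = ""          # running summary ("reasoning cache") carried across iterations
    final_response = ""
    for _ in range(num_iterations):
        # Each short-horizon attempt conditions on the compressed summary of all
        # prior iterations instead of the full raw reasoning trace.
        final_response = generate(
            f"Problem: {problem}\n"
            f"Summary of progress so far: {summary}\n"
            f"Continue working toward a final answer.",
            tokens_per_iteration,
        )
        # Summarization is the (claimed) easier side of the asymmetry: compress the
        # new attempt together with the old summary so the next iteration can
        # improve on it rather than start from scratch.
        summary = generate(
            f"Summarize the key progress, partial results, and open issues:\n"
            f"{summary}\n{final_response}",
            summary_tokens,
        )
    return final_response
```

Under these assumptions, total test-time compute scales with the number of iterations while each individual generation stays within the short training budget, which is the mechanism the abstract credits for extrapolation beyond the training horizon.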