∇-Reasoner: LLM Reasoning via Test-Time Gradient Descent in Latent Space
March 5, 2026
Authors: Peihao Wang, Ruisi Cai, Zhen Wang, Hongyuan Mei, Qiang Liu, Pan Li, Zhangyang Wang
cs.AI
Abstract
Scaling inference-time compute for Large Language Models (LLMs) has unlocked unprecedented reasoning capabilities. However, existing inference-time scaling methods typically rely on inefficient, suboptimal discrete search algorithms or trial-and-error prompting to improve the online policy. In this paper, we propose ∇-Reasoner, an iterative generation framework that integrates differentiable optimization over token logits into the decoding loop to refine the policy on the fly. Our core component, Differentiable Textual Optimization (DTO), leverages gradient signals from both the LLM's likelihood and a reward model to refine textual representations. ∇-Reasoner further incorporates rejection sampling and acceleration designs to robustify and speed up decoding. Theoretically, we show that performing inference-time gradient descent in the sample space to maximize reward is dual to aligning an LLM policy via KL-regularized reinforcement learning. Empirically, ∇-Reasoner achieves over 20% accuracy improvement on a challenging mathematical reasoning benchmark, while reducing the number of model calls by approximately 10-40% compared to strong baselines. Overall, our work introduces a paradigm shift from zeroth-order search to first-order optimization at test time, offering a cost-effective path to amplify LLM reasoning.
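The abstract describes DTO only at a high level. Below is a minimal, self-contained sketch of the general technique it names: gradient steps on free token logits, driven jointly by an LM-likelihood term and a reward term. The toy LM, toy reward head, span length, and trade-off weight `lam` are illustrative assumptions, not the paper's components; the rejection-sampling and acceleration steps are omitted.

```python
# Minimal sketch of gradient descent over token logits mixing an
# LM-likelihood signal with a reward-model signal. Toy modules stand
# in for the real LLM and reward model; all names are illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, SPAN = 100, 32, 8          # toy vocabulary / hidden size / span length

embed = torch.nn.Embedding(VOCAB, DIM)   # shared token embeddings
lm_head = torch.nn.Linear(DIM, VOCAB)    # toy "LLM": bag-of-embeddings language model
reward_head = torch.nn.Linear(DIM, 1)    # toy reward model over the mean embedding

def lm_log_likelihood(soft_tokens):
    """Pseudo-likelihood of a relaxed token sequence under the toy LM.
    soft_tokens: (SPAN, VOCAB), rows on the probability simplex."""
    emb = soft_tokens @ embed.weight              # (SPAN, DIM) expected embeddings
    logits = lm_head(emb.mean(0))                 # context = mean of embeddings
    log_probs = F.log_softmax(logits, dim=-1)     # (VOCAB,)
    return (soft_tokens @ log_probs).sum()        # expected log-prob per position

def reward(soft_tokens):
    emb = soft_tokens @ embed.weight
    return reward_head(emb.mean(0)).squeeze()

# Optimize free logits z so that softmax(z) scores well under both objectives;
# this is first-order refinement, not discrete search.
z = torch.randn(SPAN, VOCAB, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)
lam = 0.5                                         # likelihood/reward trade-off (assumed)
for step in range(50):
    soft = F.softmax(z, dim=-1)                   # relaxed tokens stay on the simplex
    loss = -(reward(soft) + lam * lm_log_likelihood(soft))
    opt.zero_grad()
    loss.backward()
    opt.step()

tokens = z.argmax(dim=-1)                         # discretize the refined logits
print("decoded token ids:", tokens.tolist())
```

In the real system, a rejection-sampling step would presumably re-check the discretized candidates under the LLM before accepting them; that detail is not shown here.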
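The stated duality is consistent with the well-known closed form for KL-regularized RL: maximizing expected reward under a KL penalty toward a reference policy yields an exponential tilt of that policy, so following the gradient of the reward plus the reference log-likelihood in sample space climbs the log-density of the aligned policy. A short sketch of this known result (notation assumed, not taken from the paper):

```latex
% KL-regularized RL objective and its closed-form solution:
\[
\pi^{\star} \;=\; \arg\max_{\pi}\;
\mathbb{E}_{x\sim\pi}\big[r(x)\big] \;-\; \beta\,\mathrm{KL}\big(\pi \,\Vert\, \pi_{\mathrm{ref}}\big)
\quad\Longrightarrow\quad
\pi^{\star}(x) \;\propto\; \pi_{\mathrm{ref}}(x)\,\exp\!\big(r(x)/\beta\big).
\]
% Hence the sample-space gradient of the aligned log-density decomposes into
% a likelihood term and a reward term, mirroring the two DTO signals:
\[
\nabla_{x} \log \pi^{\star}(x)
\;=\; \nabla_{x} \log \pi_{\mathrm{ref}}(x) \;+\; \tfrac{1}{\beta}\,\nabla_{x} r(x).
\]
```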