BOW: Bottlenecked Next Word Exploration
June 16, 2025
Authors: Ming Shen, Zhikun Xu, Xiao Ye, Jacob Dineen, Ben Zhou
cs.AI
Abstract
Large language models (LLMs) are typically trained via next-word prediction
(NWP), which provides strong surface-level fluency but often lacks support for
robust reasoning. We propose BOttlenecked next Word exploration (BOW), a novel
RL framework that rethinks NWP by introducing a reasoning bottleneck where a
policy model first generates a reasoning path rather than predicting the next
token directly, after which a frozen judge model predicts the next token
distribution based solely on this reasoning path. We train the policy model
using GRPO with rewards that quantify how effectively the reasoning path
facilitates next-word recovery. Compared with other continual pretraining
baselines, BOW improves both the general and next-word reasoning
capabilities of the base model across various benchmarks. Our findings
show that BOW can serve as an effective and scalable alternative to vanilla
NWP.
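
The reward structure described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `judge_distribution`, `bow_reward`, and `grpo_advantages` are hypothetical names, and the stand-in judge is a lookup rather than a frozen LLM. The sketch only shows the shape of the computation, i.e. the reward is the probability the judge assigns to the gold next token given the reasoning path alone, and GRPO normalizes rewards within a group of sampled paths.

```python
def judge_distribution(reasoning_path: str) -> dict[str, float]:
    """Stand-in for the frozen judge model: P(next token | reasoning path).

    In BOW the judge is an LLM that sees ONLY the policy's reasoning path,
    never the original context; here a lookup keeps the sketch runnable.
    """
    if "capital of France" in reasoning_path:
        return {"Paris": 0.9, "Lyon": 0.1}
    return {"<unk>": 1.0}


def bow_reward(reasoning_path: str, gold_next_token: str) -> float:
    """Reward quantifying how well the reasoning path lets the judge
    recover the gold next token."""
    dist = judge_distribution(reasoning_path)
    return dist.get(gold_next_token, 0.0)


def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages as used in GRPO: normalize each sampled
    reasoning path's reward against the group mean and std."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    std = std or 1.0  # avoid division by zero when all rewards are equal
    return [(r - mean) / std for r in rewards]
```

For example, a reasoning path that narrows the judge's distribution onto the gold token earns a high reward, and within a sampled group those rewards are centered and scaled before the policy update.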