The End of Manual Decoding: Towards Truly End-to-End Language Models
October 30, 2025
Authors: Zhichao Wang, Dongyang Ma, Xinting Huang, Deng Cai, Tian Lan, Jiahao Xu, Haitao Mi, Xiaoying Tang, Yan Wang
cs.AI
Abstract
The "end-to-end" label for LLMs is a misnomer. In practice, they depend on a
non-differentiable decoding process that requires laborious hand-tuning of
hyperparameters like temperature and top-p. This paper introduces AutoDeco, a
novel architecture that enables truly "end-to-end" generation by learning to
control its own decoding strategy. We augment the standard transformer with
lightweight heads that, at each step, dynamically predict context-specific
temperature and top-p values alongside the next-token logits. This approach
transforms decoding into a parametric, token-level process, allowing the model
to self-regulate its sampling strategy within a single forward pass.
Through extensive experiments on eight benchmarks, we demonstrate that
AutoDeco not only significantly outperforms default decoding strategies but
also achieves performance comparable to an oracle-tuned baseline derived from
"hacking the test set", a practical upper bound for any static method.
Crucially, we uncover an emergent capability for instruction-based decoding
control: the model learns to interpret natural language commands (e.g.,
"generate with low randomness") and adjusts its predicted temperature and top-p
on a token-by-token basis, opening a new paradigm for steerable and interactive
LLM decoding.
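
The decoding step the abstract describes can be sketched in isolation: given one step's logits together with a predicted temperature and top-p, apply temperature scaling and nucleus filtering, then sample. This is a minimal sketch of standard temperature/top-p sampling with per-step values; the function name and the example values are assumptions for illustration, not the paper's AutoDeco heads or implementation.

```python
import numpy as np

def autodeco_style_sample(logits, temperature, top_p, rng):
    """Sample one token from logits using a per-step temperature and
    top-p (values that AutoDeco-style heads would predict each step)."""
    # Temperature scaling: lower temperature sharpens the distribution.
    scaled = logits / max(temperature, 1e-6)
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    # Nucleus (top-p) filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalise over it.
    order = np.argsort(probs)[::-1]              # tokens by descending prob
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept))
```

With a near-zero temperature the distribution collapses onto the argmax token, while a large temperature flattens it so the nucleus covers most of the vocabulary; a token-level controller can therefore move between near-greedy and exploratory decoding within one generation.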