

The End of Manual Decoding: Towards Truly End-to-End Language Models

October 30, 2025
Authors: Zhichao Wang, Dongyang Ma, Xinting Huang, Deng Cai, Tian Lan, Jiahao Xu, Haitao Mi, Xiaoying Tang, Yan Wang
cs.AI

Abstract

The "end-to-end" label for LLMs is a misnomer. In practice, they depend on a non-differentiable decoding process that requires laborious, hand-tuning of hyperparameters like temperature and top-p. This paper introduces AutoDeco, a novel architecture that enables truly "end-to-end" generation by learning to control its own decoding strategy. We augment the standard transformer with lightweight heads that, at each step, dynamically predict context-specific temperature and top-p values alongside the next-token logits. This approach transforms decoding into a parametric, token-level process, allowing the model to self-regulate its sampling strategy within a single forward pass. Through extensive experiments on eight benchmarks, we demonstrate that AutoDeco not only significantly outperforms default decoding strategies but also achieves performance comparable to an oracle-tuned baseline derived from "hacking the test set"-a practical upper bound for any static method. Crucially, we uncover an emergent capability for instruction-based decoding control: the model learns to interpret natural language commands (e.g., "generate with low randomness") and adjusts its predicted temperature and top-p on a token-by-token basis, opening a new paradigm for steerable and interactive LLM decoding.