LayerCake: Token-Aware Contrastive Decoding within Large Language Model Layers
July 6, 2025
Authors: Jingze Zhu, Yongliang Wu, Wenbo Zhu, Jiawang Cao, Yanqiang Zheng, Jiawei Chen, Xu Yang, Bernt Schiele, Jonas Fischer, Xinting Hu
cs.AI
Abstract
Large language models (LLMs) excel at natural language understanding and
generation but remain vulnerable to factual errors, limiting their reliability
in knowledge-intensive tasks. While decoding-time strategies offer a promising
and efficient training-free solution, existing methods typically treat
token-level and layer-level signals in isolation, overlooking the joint
dynamics between them. In this work, we introduce a token-aware,
layer-localized contrastive decoding method that aligns specific token types
with their most influential transformer layers to improve factual generation.
Through empirical attention analysis, we identify two key patterns: punctuation
tokens receive dominant attention in early layers, while conceptual tokens
govern semantic reasoning in intermediate layers. By selectively suppressing
attention to these token types at their respective depths, we induce controlled
factual degradation and derive contrastive signals that guide the final factual
decoding. Our method requires no additional training or model modification, and
experiments demonstrate that it consistently improves factuality across multiple
LLMs and benchmarks.
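
To make the contrastive decoding rule concrete, below is a minimal sketch of the final combination step, assuming next-token logits are available from two forward passes: a standard pass, and a degraded pass in which attention to punctuation tokens is suppressed in early layers and attention to conceptual tokens is suppressed in intermediate layers. The function name `contrastive_scores`, the weight `alpha`, and the plausibility cutoff `beta` are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_scores(logits_factual: torch.Tensor,
                       logits_degraded: torch.Tensor,
                       alpha: float = 1.0,
                       beta: float = 0.1) -> torch.Tensor:
    """Combine logits from a standard pass and an attention-suppressed pass.

    logits_factual:  next-token logits from the unmodified model.
    logits_degraded: next-token logits from the pass with token-type-specific
                     attention suppression at early/intermediate layers.
    alpha:           strength of the contrastive penalty (assumed hyperparameter).
    beta:            adaptive plausibility cutoff, as in standard contrastive decoding.
    """
    logp_factual = F.log_softmax(logits_factual, dim=-1)
    logp_degraded = F.log_softmax(logits_degraded, dim=-1)

    # Reward tokens the factual pass prefers over the deliberately degraded pass.
    scores = logp_factual - alpha * logp_degraded

    # Keep only tokens whose factual probability is within a beta-fraction of the
    # most likely token, so the contrast cannot promote implausible tokens.
    cutoff = logp_factual.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(beta))
    return scores.masked_fill(logp_factual < cutoff, float("-inf"))

# Illustrative usage with random logits standing in for the two forward passes.
vocab_size = 32000
scores = contrastive_scores(torch.randn(1, vocab_size), torch.randn(1, vocab_size))
next_token = scores.argmax(dim=-1)
```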