

CLUE: Non-parametric Verification from Experience via Hidden-State Clustering

October 2, 2025
作者: Zhenwen Liang, Ruosen Li, Yujun Zhou, Linfeng Song, Dian Yu, Xinya Du, Haitao Mi, Dong Yu
cs.AI

Abstract

Assessing the quality of Large Language Model (LLM) outputs presents a critical challenge. Previous methods either rely on text-level information (e.g., reward models, majority voting), which can overfit to superficial cues, or on calibrated confidence from token probabilities, which fails on poorly calibrated models. Yet both of these signals are, in fact, partial projections of a richer source of information: the model's internal hidden states. Early layers, closer to token embeddings, preserve semantic and lexical features that underpin text-based judgments, while later layers increasingly align with output logits, embedding confidence-related information. This paper explores hidden states directly as a unified foundation for verification. We show that the correctness of a solution is encoded as a geometrically separable signature within the trajectory of hidden activations. To validate this, we present CLUE (Clustering and Experience-based Verification), a deliberately minimalist, non-parametric verifier. With no trainable parameters, CLUE summarizes each reasoning trace by a hidden-state delta and classifies correctness via nearest-centroid distance to "success" and "failure" clusters formed from past experience. The simplicity of this method highlights the strength of the underlying signal. Empirically, CLUE consistently outperforms LLM-as-a-judge baselines and matches or exceeds modern confidence-based methods in reranking candidates, improving both top-1 and majority-vote accuracy across AIME 24/25 and GPQA. As a highlight, on AIME 24 with a 1.5B model, CLUE boosts accuracy from 56.7% (majority@64) to 70.0% (top-maj@16).
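The verification rule described in the abstract — summarize each trace by a hidden-state delta, then classify by nearest centroid against clusters of past successes and failures — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `clue_verify` function and the choice of Euclidean distance are assumptions, and how the delta is computed (e.g., a difference between hidden states at two points in the trace) is left to the caller.

```python
import numpy as np

def clue_verify(trace_delta, success_deltas, failure_deltas):
    """Classify a reasoning trace as correct/incorrect by nearest centroid.

    trace_delta    : (d,) hidden-state delta summarizing the trace to verify
    success_deltas : (n_s, d) deltas of traces known to be correct (experience)
    failure_deltas : (n_f, d) deltas of traces known to be incorrect

    Hypothetical sketch: Euclidean nearest-centroid, no trainable parameters.
    """
    # Centroids of the "success" and "failure" clusters from past experience
    mu_success = np.mean(success_deltas, axis=0)
    mu_failure = np.mean(failure_deltas, axis=0)

    # Predict "correct" iff the trace is closer to the success centroid
    d_success = np.linalg.norm(trace_delta - mu_success)
    d_failure = np.linalg.norm(trace_delta - mu_failure)
    return bool(d_success < d_failure)
```

For reranking, the same distances can be turned into a score (e.g., `d_failure - d_success`) and used to pick the top-1 candidate or to weight a majority vote, matching the evaluation settings mentioned above.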