

CLUE: Non-parametric Verification from Experience via Hidden-State Clustering

October 2, 2025
作者: Zhenwen Liang, Ruosen Li, Yujun Zhou, Linfeng Song, Dian Yu, Xinya Du, Haitao Mi, Dong Yu
cs.AI

Abstract

Assessing the quality of Large Language Model (LLM) outputs presents a critical challenge. Previous methods either rely on text-level information (e.g., reward models, majority voting), which can overfit to superficial cues, or on calibrated confidence from token probabilities, which fails on poorly calibrated models. Yet both of these signals are, in fact, partial projections of a richer source of information: the model's internal hidden states. Early layers, closer to the token embeddings, preserve semantic and lexical features that underpin text-based judgments, while later layers increasingly align with the output logits, embedding confidence-related information. This paper explores hidden states directly as a unified foundation for verification. We show that the correctness of a solution is encoded as a geometrically separable signature within the trajectory of hidden activations. To validate this, we present CLUE (Clustering and Experience-based Verification), a deliberately minimalist, non-parametric verifier. With no trainable parameters, CLUE summarizes each reasoning trace by a hidden-state delta and classifies correctness via nearest-centroid distance to "success" and "failure" clusters formed from past experience. The simplicity of this method highlights the strength of the underlying signal. Empirically, CLUE consistently outperforms LLM-as-a-judge baselines and matches or exceeds modern confidence-based methods in reranking candidates, improving both top-1 and majority-vote accuracy across AIME 24/25 and GPQA. As a highlight, on AIME 24 with a 1.5B model, CLUE boosts accuracy from 56.7% (majority@64) to 70.0% (top-maj@16).
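The nearest-centroid rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes Euclidean distance and mean centroids, and the function and variable names (`clue_score`, `trace_delta`, etc.) are hypothetical. How each hidden-state delta is actually computed from a reasoning trace is a detail of the paper not reproduced here.

```python
import numpy as np


def clue_score(trace_delta, success_deltas, failure_deltas):
    """Score one reasoning trace by nearest-centroid distance.

    trace_delta:     1-D vector summarizing the trace's hidden states
                     (e.g., a hidden-state delta, per the abstract).
    success_deltas:  array of deltas from previously correct traces.
    failure_deltas:  array of deltas from previously incorrect traces.

    Returns a scalar: positive means the trace lies closer to the
    "success" centroid than to the "failure" centroid.
    """
    # Centroids of past experience; no trainable parameters involved.
    success_centroid = np.mean(success_deltas, axis=0)
    failure_centroid = np.mean(failure_deltas, axis=0)

    dist_success = np.linalg.norm(trace_delta - success_centroid)
    dist_failure = np.linalg.norm(trace_delta - failure_centroid)

    # Higher score = nearer the success cluster.
    return dist_failure - dist_success


def rerank(candidate_deltas, success_deltas, failure_deltas):
    """Order candidate traces from most to least likely correct."""
    scores = [clue_score(d, success_deltas, failure_deltas)
              for d in candidate_deltas]
    return np.argsort(scores)[::-1]
```

Reranking a pool of candidates then amounts to sorting them by this score and taking the top-1, or restricting majority voting to the top-k (the "top-maj" setting quoted in the results).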