Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits
December 23, 2025
Authors: Amirhosein Ghasemabadi, Di Niu
cs.AI
Abstract
Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typically rely on external judges, multi-sample consistency, or text-based self-critique, which either incur additional compute or correlate weakly with true correctness. We ask: can LLMs predict their own failures by inspecting internal states during inference? We introduce Gnosis, a lightweight self-awareness mechanism that enables frozen LLMs to perform intrinsic self-verification by decoding signals from hidden states and attention patterns. Gnosis passively observes internal traces, compresses them into fixed-budget descriptors, and predicts correctness with negligible inference cost, adding only ~5M parameters and operating independently of sequence length. Across math reasoning, open-domain question answering, and academic knowledge benchmarks, and over frozen backbones ranging from 1.7B to 20B parameters, Gnosis consistently outperforms strong internal baselines and large external judges in both accuracy and calibration. Moreover, it generalizes zero-shot to partial generations, enabling early detection of failing trajectories and compute-aware control. These results show that reliable correctness cues are intrinsic to the generation process and can be extracted efficiently without external supervision.
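To make the idea concrete, the sketch below shows one way a lightweight, fixed-budget correctness probe over a frozen LLM's internal states could look. It is only an illustration of the general recipe described in the abstract (compress variable-length hidden-state traces into a fixed number of descriptors, then predict correctness); the class name `CorrectnessProbe`, all dimensions, and the cross-attention pooling design are assumptions of this sketch, not the authors' Gnosis implementation.

```python
# Minimal sketch of a fixed-budget correctness probe over a frozen LLM's
# hidden states. Names, dimensions, and architecture are illustrative
# assumptions; the paper's actual Gnosis mechanism may differ.
import torch
import torch.nn as nn


class CorrectnessProbe(nn.Module):
    """Pool a variable-length hidden-state trace into a fixed number of
    descriptor vectors, then predict a correctness probability."""

    def __init__(self, d_model: int = 2048, d_probe: int = 256,
                 num_descriptors: int = 16, num_heads: int = 4):
        super().__init__()
        # Project the frozen backbone's hidden states into a small probe space.
        self.proj = nn.Linear(d_model, d_probe)
        # Learned queries form the "fixed budget": their number does not
        # grow with sequence length.
        self.queries = nn.Parameter(torch.randn(num_descriptors, d_probe) * 0.02)
        # Cross-attention pooling from the trace into the descriptors.
        self.pool = nn.MultiheadAttention(d_probe, num_heads, batch_first=True)
        # Small head mapping pooled descriptors to a single correctness logit.
        self.head = nn.Sequential(
            nn.LayerNorm(d_probe),
            nn.Linear(d_probe, d_probe),
            nn.GELU(),
            nn.Linear(d_probe, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, d_model), captured from the frozen
        # LLM during generation (e.g., under torch.no_grad()).
        x = self.proj(hidden_states)                        # (B, T, d_probe)
        q = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        descriptors, _ = self.pool(q, x, x)                 # (B, K, d_probe)
        logit = self.head(descriptors.mean(dim=1))          # (B, 1)
        return torch.sigmoid(logit).squeeze(-1)             # correctness probability


if __name__ == "__main__":
    probe = CorrectnessProbe()
    n_params = sum(p.numel() for p in probe.parameters())
    print(f"probe parameters: {n_params / 1e6:.1f}M")       # well under a few million
    # Dummy trace standing in for hidden states captured during inference.
    fake_trace = torch.randn(2, 300, 2048)
    print(probe(fake_trace))                                 # per-sample probabilities
```

Because the pooled descriptors have a fixed size, the probe's cost per prediction is independent of how long the generation was, which is what allows the same module to be applied to partial generations for early failure detection.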