From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models
January 22, 2026
Authors: Jiaxin Zhang, Wendi Cui, Zhuohang Li, Lifu Huang, Bradley Malin, Caiming Xiong, Chien-Sheng Wu
cs.AI
Abstract
While Large Language Models (LLMs) show remarkable capabilities, their unreliability remains a critical barrier to deployment in high-stakes domains. This survey charts a functional shift in how this challenge is addressed: the evolution of uncertainty from a passive diagnostic metric into an active control signal that guides real-time model behavior. We demonstrate how uncertainty is leveraged as an active control signal across three frontiers: in advanced reasoning, to optimize computation and trigger self-correction; in autonomous agents, to govern metacognitive decisions about tool use and information seeking; and in reinforcement learning, to mitigate reward hacking and enable self-improvement via intrinsic rewards. By grounding these advances in emerging theoretical frameworks such as Bayesian methods and Conformal Prediction, we provide a unified perspective on this transformative trend. The survey offers a comprehensive overview, critical analysis, and practical design patterns, arguing that mastering uncertainty is essential for building the next generation of scalable, reliable, and trustworthy AI.
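To make the "active control signal" framing concrete, the following is a minimal, hypothetical sketch (not taken from the survey) of uncertainty-gated self-correction: the mean per-token predictive entropy of a drafted answer is computed from decoding log-probabilities, and a correction pass is triggered only when it exceeds a threshold. The function name `mean_token_entropy`, the toy log-probabilities, and the `ENTROPY_THRESHOLD` value are illustrative assumptions, not artifacts of any specific method in the survey.

```python
import math

def mean_token_entropy(token_logprob_lists):
    """Average per-token predictive entropy (in nats) over a generated sequence.

    Each element of `token_logprob_lists` holds the log-probabilities of the
    candidate tokens considered at one decoding step (e.g., top-k logprobs
    returned by a serving API).
    """
    entropies = []
    for logprobs in token_logprob_lists:
        probs = [math.exp(lp) for lp in logprobs]
        z = sum(probs)  # renormalize over the returned candidates
        entropies.append(-sum((p / z) * math.log(p / z) for p in probs))
    return sum(entropies) / len(entropies)

# Toy logprobs for a 3-token draft answer (top-3 candidates per step);
# in practice these would come from the model's decoding output.
draft_logprobs = [
    [-0.1, -2.5, -4.0],   # confident step
    [-0.9, -1.1, -1.4],   # uncertain step
    [-0.2, -2.0, -3.5],
]

ENTROPY_THRESHOLD = 0.8  # tuning knob: higher means fewer correction passes

if mean_token_entropy(draft_logprobs) > ENTROPY_THRESHOLD:
    print("High uncertainty: trigger a self-correction / re-sampling pass.")
else:
    print("Low uncertainty: accept the draft answer.")
```

The same gate can also allocate test-time compute, e.g., sampling more reasoning chains only for high-entropy drafts rather than uniformly for every query.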
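The abstract also cites Conformal Prediction as one of the grounding frameworks. As a self-contained illustration of the split-conformal recipe it refers to, the sketch below calibrates a nonconformity threshold on held-out scores and forms a prediction set at test time; under exchangeability, the set covers the true answer with probability at least 1 - alpha. The helper `split_conformal_threshold`, the toy scores, and the candidate answers are assumptions introduced here for illustration.

```python
import math

def split_conformal_threshold(calibration_scores, alpha=0.1):
    """Split conformal prediction: return the finite-sample-corrected
    (1 - alpha) empirical quantile of held-out nonconformity scores."""
    n = len(calibration_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the corrected quantile
    return sorted(calibration_scores)[min(k, n) - 1]

# Toy nonconformity scores, e.g., 1 minus the model's probability of the
# correct answer on calibration questions; real scores come from held-out data.
cal_scores = [0.05, 0.12, 0.30, 0.07, 0.55, 0.18, 0.22, 0.09, 0.41, 0.15]
q_hat = split_conformal_threshold(cal_scores, alpha=0.2)  # target ~80% coverage

# At test time, keep every candidate whose nonconformity score is <= q_hat.
candidate_scores = {"answer A": 0.08, "answer B": 0.47, "answer C": 0.26}
prediction_set = [a for a, s in candidate_scores.items() if s <= q_hat]
print(q_hat, prediction_set)
```

A large or empty prediction set is itself a usable control signal, e.g., a cue for an agent to abstain, ask a clarifying question, or call a tool instead of committing to a single answer.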