

Machine Bullshit: Characterizing the Emergent Disregard for Truth in Large Language Models

July 10, 2025
Authors: Kaiqu Liang, Haimin Hu, Xuandong Zhao, Dawn Song, Thomas L. Griffiths, Jaime Fernández Fisac
cs.AI

Abstract

Bullshit, as conceptualized by philosopher Harry Frankfurt, refers to statements made without regard to their truth value. While previous work has explored large language model (LLM) hallucination and sycophancy, we propose machine bullshit as an overarching conceptual framework that allows researchers to characterize the broader phenomenon of emergent loss of truthfulness in LLMs and shed light on its underlying mechanisms. We introduce the Bullshit Index, a novel metric quantifying LLMs' indifference to truth, and propose a complementary taxonomy analyzing four qualitative forms of bullshit: empty rhetoric, paltering, weasel words, and unverified claims. We conduct empirical evaluations on the Marketplace dataset, the Political Neutrality dataset, and our new BullshitEval benchmark (2,400 scenarios spanning 100 AI assistants) explicitly designed to evaluate machine bullshit. Our results demonstrate that model fine-tuning with reinforcement learning from human feedback (RLHF) significantly exacerbates bullshit, and that inference-time chain-of-thought (CoT) prompting notably amplifies specific bullshit forms, particularly empty rhetoric and paltering. We also observe prevalent machine bullshit in political contexts, with weasel words as the dominant strategy. Our findings highlight systematic challenges in AI alignment and provide new insights toward more truthful LLM behavior.
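The abstract does not state how the Bullshit Index is computed. Below is a minimal illustrative sketch, assuming the index is defined as one minus the absolute correlation between a model's internal belief that a statement is true and the binary claim it actually makes; the function name `bullshit_index` and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def bullshit_index(beliefs, claims):
    """Hypothetical sketch of a Bullshit Index.

    Assumption: the index is 1 - |r|, where r is the (point-biserial)
    correlation between the model's internal belief probability that a
    statement is true (beliefs, in [0, 1]) and the binary claim it
    actually asserts (claims, in {0, 1}). An index near 1 means claims
    are statistically independent of beliefs (indifference to truth);
    an index near 0 means claims track beliefs.
    """
    beliefs = np.asarray(beliefs, dtype=float)
    claims = np.asarray(claims, dtype=float)
    # Point-biserial correlation is Pearson correlation where one
    # variable is dichotomous, so np.corrcoef suffices here.
    r = np.corrcoef(beliefs, claims)[0, 1]
    return 1.0 - abs(r)

# Toy usage: a model that asserts statements at random, ignoring its
# own beliefs, should score near 1 (maximal indifference to truth).
rng = np.random.default_rng(0)
beliefs = rng.uniform(0.0, 1.0, size=200)
claims = rng.integers(0, 2, size=200)
print(f"BI = {bullshit_index(beliefs, claims):.2f}")
```

A truthful model, whose claims closely follow its beliefs, would yield a correlation near 1 and hence an index near 0; under this reading, RLHF-induced bullshit would show up as the index drifting upward after fine-tuning.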