The Curious Case of Factual (Mis)Alignment between LLMs' Short- and Long-Form Answers

October 13, 2025
作者: Saad Obaid ul Islam, Anne Lauscher, Goran Glavaš
cs.AI

Abstract
Large language models (LLMs) can correctly answer "When was Einstein born?" yet fail to provide the same date when writing about Einstein's life, revealing a fundamental inconsistency in how models access factual knowledge across task complexities. While models display impressive accuracy on factual question-answering benchmarks, the reliability gap between simple and complex queries remains poorly understood, eroding their trustworthiness. In this work, we introduce Short-Long Form Alignment for Factual Question Answering (SLAQ), a controlled evaluation framework that compares LLMs' answers to the same factual questions asked (a) in isolation (short) vs. (b) integrated into complex queries (long). Looking at 16 LLMs across 600 queries, we find a systematic misalignment of answers to the corresponding short and long queries. We further uncover position-dependent accuracy loss and momentum effects, where consecutive correct or incorrect answers create self-reinforcing patterns. Through mechanistic analysis, we find that aligned facts activate overlapping model internals, and that metrics based on mechanistic similarity can predict short-long answer alignment with up to 78% accuracy. Our work establishes factual consistency over query complexity as an important aspect of LLMs' trustworthiness and challenges current evaluation practices, which implicitly assume that good performance on simple factual queries implies reliability in more complex knowledge-seeking tasks too.
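The short-long comparison at the heart of SLAQ can be illustrated with a minimal sketch: check whether the answer to an isolated factual question reappears in the long-form response covering the same fact. The function names and the naive substring-matching criterion below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a short-long alignment check (not the SLAQ
# implementation; function names and the matching rule are assumptions).

def normalize(answer: str) -> str:
    """Lowercase and drop punctuation so surface formatting doesn't matter."""
    return " ".join(
        "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace()).split()
    )

def is_aligned(short_answer: str, long_answer: str) -> bool:
    """A fact counts as aligned if the short-form answer appears in the long-form text."""
    return normalize(short_answer) in normalize(long_answer)

def alignment_rate(pairs):
    """Fraction of (short, long) answer pairs that agree."""
    return sum(is_aligned(s, l) for s, l in pairs) / len(pairs)

pairs = [
    ("March 14, 1879", "Albert Einstein was born on March 14, 1879, in Ulm."),
    ("March 14, 1879", "Einstein, born in 1878, later developed relativity."),
]
print(alignment_rate(pairs))  # 0.5: the first pair matches, the second does not
```

A real evaluation would need fuzzier matching (date parsing, entity normalization) than this substring test, but the core idea is the same: the short-form answer serves as the reference against which the long-form text is scored.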