RiddleBench: A New Generative Reasoning Benchmark for LLMs
October 28, 2025
Authors: Deepon Halder, Alan Saji, Thanmay Jayakumar, Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre
cs.AI
Abstract
Large Language Models have demonstrated strong performance on many
established reasoning benchmarks. However, these benchmarks primarily evaluate
structured skills like quantitative problem-solving, leaving a gap in assessing
flexible, multifaceted reasoning abilities that are central to human
intelligence. These abilities require integrating logical deduction with
spatial awareness and constraint satisfaction, which current evaluations do not
measure well. To address this, we introduce RiddleBench, a benchmark of 1,737
challenging puzzles in English designed to probe these core reasoning
capabilities. Evaluation of state-of-the-art models on RiddleBench shows
fundamental weaknesses. Even top proprietary models like Gemini 2.5 Pro, o3,
and Claude 4 Sonnet achieve accuracy just above 60% (60.30%, 63.37%, and
63.16%, respectively). Analysis further reveals deep failures, including hallucination
cascades (accepting flawed reasoning from other models) and poor
self-correction due to a strong self-confirmation bias. Their reasoning is also
fragile, with performance degrading significantly when constraints are
reordered or irrelevant information is introduced. RiddleBench functions as a
diagnostic tool for these issues and as a resource for guiding the development
of more robust and reliable language models.
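As a rough illustration of the robustness probes mentioned in the abstract, the sketch below perturbs a puzzle by reordering its constraints and by inserting an irrelevant fact, then rebuilds the prompt for re-evaluation. This is a minimal sketch, not the authors' evaluation code: the puzzle representation (a list of constraint sentences plus a question) and all function names are hypothetical assumptions.

    import random

    def reorder_constraints(constraints, seed=0):
        # Shuffle the constraint sentences; the puzzle's content is unchanged.
        rng = random.Random(seed)
        shuffled = list(constraints)
        rng.shuffle(shuffled)
        return shuffled

    def add_irrelevant_fact(constraints, distractor):
        # Insert a fact that is consistent with the puzzle but useless for solving it.
        middle = len(constraints) // 2
        return constraints[:middle] + [distractor] + constraints[middle:]

    def build_prompt(constraints, question):
        # Assemble the puzzle prompt from its constraints and final question.
        return "\n".join(constraints) + "\n" + question

    # Hypothetical seating-arrangement riddle with a unique answer (Alice).
    constraints = [
        "Alice sits immediately to the left of Bob.",
        "Carol does not sit at either end.",
        "Dave sits somewhere to the right of Carol.",
    ]
    question = "Four people sit in a row. Who sits at the far left?"

    variants = {
        "original": build_prompt(constraints, question),
        "reordered": build_prompt(reorder_constraints(constraints, seed=1), question),
        "noisy": build_prompt(add_irrelevant_fact(constraints, "Bob owns a red bicycle."), question),
    }

    for name, prompt in variants.items():
        print(f"--- {name} ---\n{prompt}\n")

A model whose reasoning is stable should give the same answer to all three variants; the abstract reports that accuracy degrades significantly under exactly these kinds of perturbations.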