RiddleBench: A New Generative Reasoning Benchmark for LLMs
October 28, 2025
Authors: Deepon Halder, Alan Saji, Thanmay Jayakumar, Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre
cs.AI
Abstract
Large Language Models have demonstrated strong performance on many
established reasoning benchmarks. However, these benchmarks primarily evaluate
structured skills like quantitative problem-solving, leaving a gap in assessing
flexible, multifaceted reasoning abilities that are central to human
intelligence. These abilities require integrating logical deduction with
spatial awareness and constraint satisfaction, which current evaluations do not
measure well. To address this, we introduce RiddleBench, a benchmark of 1,737
challenging puzzles in English designed to probe these core reasoning
capabilities. Evaluation of state-of-the-art models on RiddleBench shows
fundamental weaknesses. Even top proprietary models like Gemini 2.5 Pro, o3,
and Claude 4 Sonnet achieve accuracy just above 60% (60.30%, 63.37%, and
63.16%, respectively). Analysis further reveals deep failures, including hallucination
cascades (accepting flawed reasoning from other models) and poor
self-correction due to a strong self-confirmation bias. Their reasoning is also
fragile, with performance degrading significantly when constraints are
reordered or irrelevant information is introduced. RiddleBench functions as a
diagnostic tool for these issues and as a resource for guiding the development
of more robust and reliable language models.
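To make the robustness findings concrete, here is a minimal sketch, not the authors' released harness, of how the two perturbations described above (constraint reordering and irrelevant-information insertion) could be probed on a puzzle written as a list of constraint sentences. The `ask_model` callable, the distractor clause, and the exact-match check are hypothetical stand-ins; RiddleBench's actual data format and scoring may differ.

```python
# Minimal sketch, not the authors' released harness: probes the two
# robustness failure modes named in the abstract (constraint reordering and
# irrelevant information) on a puzzle expressed as a list of constraint
# sentences. `ask_model`, the distractor clause, and exact-match scoring are
# hypothetical stand-ins; RiddleBench's actual data format may differ.
import random
from typing import Callable

def build_prompt(constraints: list[str], question: str) -> str:
    # Concatenate the constraint sentences and the question into one prompt.
    return "\n".join(constraints) + "\n" + question

def robustness_probe(
    ask_model: Callable[[str], str],   # wraps the model under evaluation
    constraints: list[str],
    question: str,
    gold: str,
    seed: int = 0,
) -> dict[str, bool]:
    rng = random.Random(seed)

    # 1) Original constraint ordering.
    base = ask_model(build_prompt(constraints, question))

    # 2) Same constraints, shuffled: the content is unchanged, so a robust
    #    reasoner should reach the same answer.
    shuffled = constraints[:]
    rng.shuffle(shuffled)
    reordered = ask_model(build_prompt(shuffled, question))

    # 3) Insert an irrelevant clause that does not affect the solution.
    distractor = "Unrelatedly, the house next door was repainted last year."
    noisy = constraints[:]
    noisy.insert(rng.randrange(len(noisy) + 1), distractor)
    with_noise = ask_model(build_prompt(noisy, question))

    # Exact-match scoring is used here purely for illustration.
    return {
        "base_correct": base.strip() == gold,
        "reordered_correct": reordered.strip() == gold,
        "noise_correct": with_noise.strip() == gold,
    }
```

Comparing `base_correct` against `reordered_correct` and `noise_correct` across a benchmark gives the kind of degradation measurement the abstract reports for reordered constraints and injected irrelevant information.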