
General365: Benchmarking General Reasoning in Large Language Models Across Diverse and Challenging Tasks

April 13, 2026
作者: Junlin Liu, Shengnan An, Shuang Zhou, Dan Ma, Shixiong Luo, Ying Xie, Yuan Zhang, Wenling Yuan, Yifan Zhou, Xiaoyu Li, Ziwen Wang, Xuezhi Cao, Xunliang Cai
cs.AI

Abstract

Contemporary large language models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in specialized domains like mathematics and physics. However, their ability to generalize these reasoning skills to broader contexts, often termed general reasoning, remains under-explored. Unlike domain-specific reasoning, general reasoning relies less on expert knowledge but still presents formidable challenges, such as complex constraints, nested logical branches, and semantic interference. To address this gap, we introduce General365, a benchmark specifically designed to assess general reasoning in LLMs. By restricting background knowledge to a K-12 level, General365 explicitly decouples reasoning from specialized expertise. The benchmark comprises 365 seed problems and 1,095 variant problems across eight categories, ensuring both high difficulty and diversity. Evaluations of 26 leading LLMs reveal that even the top-performing model achieves only 62.8% accuracy, in stark contrast to the near-perfect performance of LLMs on math and physics benchmarks. These results suggest that the reasoning abilities of current LLMs are heavily domain-dependent, leaving significant room for improvement in broader applications. We envision General365 as a catalyst for advancing LLM reasoning beyond domain-specific tasks toward robust, general-purpose real-world scenarios. Code, Dataset, and Leaderboard: https://general365.github.io
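The benchmark's composition (365 seed problems, each expanded into variants for a total of 1,095, grouped into eight categories) implies a straightforward per-category scoring scheme. A minimal sketch, assuming a hypothetical record format of `(category, is_correct)` pairs; the actual dataset schema and evaluation harness may differ:

```python
# Illustrative evaluation sketch for a General365-style benchmark.
# The (category, is_correct) record format is an assumption for this
# example, not the benchmark's actual schema.
from collections import defaultdict

def accuracy_by_category(results):
    """Compute per-category accuracy from (category, is_correct) pairs."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for category, is_correct in results:
        total[category] += 1
        correct[category] += int(is_correct)
    return {c: correct[c] / total[c] for c in total}

# Composition check: 365 seeds, each with 3 variants -> 1,095 variants.
assert 365 * 3 == 1095
```

Aggregating per category rather than overall makes domain-dependent gaps (e.g. strong math performance vs. weak general reasoning) directly visible in the results.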
PDF | April 15, 2026