FrontierCS: Evolving Challenges for Evolving Intelligence
December 17, 2025
Authors: Qiuyang Mang, Wenhao Chai, Zhifei Li, Huanzhi Mao, Shang Zhou, Alexander Du, Hanchen Li, Shu Liu, Edwin Chen, Yichuan Wang, Xieting Chu, Zerui Cheng, Yuan Xu, Tian Xia, Zirui Wang, Tianneng Shi, Jianzhu Yao, Yilong Zhao, Qizheng Zhang, Charlie Ruan, Zeyu Shen, Kaiyuan Liu, Runyuan He, Dong Xing, Zerui Li, Zirong Zeng, Yige Jiang, Lufeng Cheng, Ziyi Zhao, Youran Sun, Wesley Zheng, Meiyuwang Zhang, Ruyi Ji, Xuechang Tu, Zihan Zheng, Zexing Chen, Kangyang Zhou, Zhaozi Wang, Jingbang Chen, Aleksandra Korolova, Peter Henderson, Pramod Viswanath, Vijay Ganesh, Saining Xie, Zhuang Liu, Dawn Song, Sewon Min, Ion Stoica, Joseph E. Gonzalez, Jingbo Shang, Alvin Cheung
cs.AI
Abstract
We introduce FrontierCS, a benchmark of 156 open-ended problems across diverse areas of computer science, designed and reviewed by experts, including CS PhDs and top-tier competitive programming participants and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, which are often NP-hard variants of competitive programming problems with objective partial scoring, and research problems that share this property. For each problem, we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts on both the algorithmic and research tracks, that increasing reasoning budgets alone does not close this gap, and that models often over-optimize for generating merely workable code instead of discovering high-quality algorithms and system designs.
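The abstract does not spell out how scoring works in practice. As a minimal, hypothetical sketch of the kind of evaluation it describes, the Python snippet below runs a submitted program on a problem instance and returns an objective partial score. The function names (evaluate_submission, score_solution), the stdin/stdout protocol, and the [0, 1] score range are assumptions made for illustration, not the benchmark's actual interface.

import subprocess

def evaluate_submission(program_path: str, instance_path: str, timeout_s: float = 60.0) -> float:
    """Run a submitted program on one instance and return a partial score in [0, 1].

    Assumes (for illustration) that the program reads the instance from stdin and
    prints a candidate solution to stdout; score_solution is a problem-specific
    checker that rates solution quality, e.g. the objective value achieved relative
    to an expert reference solution.
    """
    with open(instance_path) as f:
        instance = f.read()
    try:
        result = subprocess.run(
            ["python", program_path],
            input=instance,
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return 0.0  # no credit if the program exceeds the time limit
    if result.returncode != 0:
        return 0.0  # crashing programs receive no partial credit
    return score_solution(instance, result.stdout)

def score_solution(instance: str, candidate: str) -> float:
    """Problem-specific quality metric; a placeholder in this sketch."""
    raise NotImplementedError

Under a scheme like this, a submission that merely runs to completion still earns little unless the checker rates its solution quality highly, which is the distinction the abstract's last finding draws between workable code and high-quality algorithm and system designs.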