CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery
June 12, 2024
Authors: Xiaoshuai Song, Muxi Diao, Guanting Dong, Zhengyang Wang, Yujia Fu, Runqi Qiao, Zhexu Wang, Dayuan Fu, Huangxuan Wu, Bin Liang, Weihao Zeng, Yejie Wang, Zhuoma GongQue, Jianing Yu, Qiuna Tan, Weiran Xu
cs.AI
Abstract
Computer Science (CS) stands as a testament to the intricacies of human
intelligence, profoundly advancing the development of artificial intelligence
and modern society. However, the current community of large language models
(LLMs) focuses excessively on benchmarks for analyzing specific foundational
skills (e.g., mathematics and code generation), neglecting an all-round
evaluation of
the computer science field. To bridge this gap, we introduce CS-Bench, the
first bilingual (Chinese-English) benchmark dedicated to evaluating the
performance of LLMs in computer science. CS-Bench comprises approximately 5K
meticulously curated test samples, covering 26 subfields across 4 key areas of
computer science, encompassing various task forms and divisions of knowledge
and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of
over 30 mainstream LLMs, revealing the relationship between CS performance and
model scales. We also quantitatively analyze the reasons for failures in
existing LLMs and highlight directions for improvements, including knowledge
supplementation and CS-specific reasoning. Further cross-capability experiments
show a high correlation between LLMs' capabilities in computer science and
their abilities in mathematics and coding. Moreover, expert LLMs specialized in
mathematics and coding also demonstrate strong performances in several CS
subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM
applications in the CS field and paving new avenues in assessing LLMs' diverse
reasoning capabilities. The CS-Bench data and evaluation code are available at
https://github.com/csbench/csbench.