AutoCodeBench: Large Language Models are Automatic Code Benchmark Generators
August 12, 2025
Authors: Jason Chou, Ao Liu, Yuchi Deng, Zhiying Zeng, Tao Zhang, Haotian Zhu, Jianwei Cai, Yue Mao, Chenchen Zhang, Lingyun Tan, Ziyan Xu, Bohui Zhai, Hengyi Liu, Speed Zhu, Wiggin Zhou, Fengzong Lian
cs.AI
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across
various domains, with code generation emerging as a key area of focus. While
numerous benchmarks have been proposed to evaluate their code generation
abilities, these benchmarks face several critical limitations. First, they
often rely on manual annotations, which are time-consuming and difficult to
scale across different programming languages and problem complexities. Second,
most existing benchmarks focus primarily on Python, while the few multilingual
benchmarks suffer from limited difficulty and uneven language distribution. To
address these challenges, we propose AutoCodeGen, an automated method for
generating high-difficulty multilingual code generation datasets without manual
annotations. AutoCodeGen ensures the correctness and completeness of test cases
by generating test inputs with LLMs and obtaining test outputs through a
multilingual sandbox, while achieving high data quality through reverse-order
problem generation and multiple filtering steps. Using this novel method, we
introduce AutoCodeBench, a large-scale code generation benchmark comprising
3,920 problems evenly distributed across 20 programming languages. It is
specifically designed to evaluate LLMs on challenging, diverse, and practical
multilingual tasks. We evaluate over 30 leading open-source and proprietary
LLMs on AutoCodeBench and its simplified version AutoCodeBench-Lite. The
results show that even the most advanced LLMs struggle with the complexity,
diversity, and multilingual nature of these tasks. In addition, we introduce
AutoCodeBench-Complete, specifically designed for base models to assess their
few-shot code generation capabilities. We hope the AutoCodeBench series will
serve as a valuable resource and inspire the community to focus on more
challenging and practical multilingual code generation scenarios.
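
The abstract describes AutoCodeGen only at a high level: an LLM proposes test inputs, a multilingual sandbox executes a reference solution to obtain the expected outputs, and unusable cases are filtered out. The sketch below illustrates that input-then-execute idea for a single language (Python). It is a minimal illustration under assumptions, not the paper's implementation: `llm_generate`, the convention that the snippet defines a function named `solution`, and the subprocess-based "sandbox" are all hypothetical stand-ins.

```python
import subprocess
import tempfile
import textwrap


def generate_test_inputs(llm_generate, code_snippet: str, n: int = 5) -> list[str]:
    """Ask an LLM for candidate test inputs for the given snippet.

    `llm_generate` is a placeholder for any text-completion call that
    returns one Python literal per line (an assumption, not the paper's API).
    """
    prompt = textwrap.dedent(f"""\
        Given the following Python function, propose {n} diverse test inputs,
        one Python literal per line:

        {code_snippet}
        """)
    return [line.strip() for line in llm_generate(prompt).splitlines() if line.strip()]


def run_in_sandbox(code_snippet: str, test_input: str, timeout: int = 5) -> str | None:
    """Execute the snippet on one input in a subprocess and capture its output.

    A single-language stand-in for the paper's multilingual sandbox.
    Returns None when execution fails or times out.
    """
    # Assumes the snippet defines a function named `solution` (illustrative only).
    program = f"{code_snippet}\n\nprint(repr(solution({test_input})))\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            ["python3", path], capture_output=True, text=True, timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return None
    return result.stdout.strip() if result.returncode == 0 else None


def build_test_cases(llm_generate, code_snippet: str) -> list[tuple[str, str]]:
    """Pair each LLM-proposed input with the output observed in the sandbox.

    Inputs the snippet cannot handle are dropped, a simple analogue of the
    filtering steps mentioned in the abstract.
    """
    cases = []
    for test_input in generate_test_inputs(llm_generate, code_snippet):
        output = run_in_sandbox(code_snippet, test_input)
        if output is not None:
            cases.append((test_input, output))
    return cases
```

Deriving expected outputs by executing a reference solution, rather than asking the model to guess them, is what keeps the generated test cases consistent with the code; the reverse-order problem generation and multi-step filtering described in the abstract would sit on top of a loop like this.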