
CodeEditorBench: Evaluating Code Editing Capability of Large Language Models

April 4, 2024
作者: Jiawei Guo, Ziming Li, Xueling Liu, Kaijing Ma, Tianyu Zheng, Zhouliang Yu, Ding Pan, Yizhi LI, Ruibo Liu, Yue Wang, Shuyue Guo, Xingwei Qu, Xiang Yue, Ge Zhang, Wenhu Chen, Jie Fu
cs.AI

Abstract
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks that focus solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models on CodeEditorBench, highlighting differences in model performance based on problem type and prompt sensitivity. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.
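
To make the four editing categories concrete, below is a minimal, hypothetical sketch of what an evaluation loop over such tasks might look like. The task names mirror the categories in the abstract (debugging, translating, polishing, requirement switching); the data layout, prompt template, and the `query_model` and `check` callables are illustrative assumptions, not the released benchmark's actual format or API.

```python
"""Hypothetical sketch of a CodeEditorBench-style evaluation loop (not the official harness)."""
from dataclasses import dataclass
from typing import Callable

# The four editing categories named in the abstract.
TASKS = ("debug", "translate", "polish", "requirement_switch")

@dataclass
class EditingProblem:
    task: str          # one of TASKS
    language: str      # e.g. "python", "cpp"
    source_code: str   # the code to be edited
    instruction: str   # task-specific instruction (bug report, target language, new requirement, ...)

def build_prompt(problem: EditingProblem) -> str:
    """Assemble a simple zero-shot prompt; the benchmark's real prompts may differ."""
    return (
        f"Task: {problem.task}\n"
        f"Language: {problem.language}\n"
        f"Instruction: {problem.instruction}\n"
        f"Code:\n{problem.source_code}\n"
        "Return only the edited code."
    )

def evaluate(problems: list[EditingProblem],
             query_model: Callable[[str], str],
             check: Callable[[EditingProblem, str], bool]) -> dict[str, float]:
    """Query the model on each problem and report per-task pass rates."""
    passed = {t: 0 for t in TASKS}
    total = {t: 0 for t in TASKS}
    for p in problems:
        total[p.task] += 1
        edited = query_model(build_prompt(p))
        if check(p, edited):  # e.g. run unit tests against the edited code
            passed[p.task] += 1
    return {t: passed[t] / total[t] for t in TASKS if total[t]}
```

Per-task pass rates of this kind are what allow the comparison the abstract describes, i.e. contrasting closed- and open-source models across problem types rather than reporting a single aggregate score.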
