A Static Evaluation of Code Completion by Large Language Models
June 5, 2023
Authors: Hantian Ding, Varun Kumar, Yuchen Tian, Zijian Wang, Rob Kwiatkowski, Xiaopeng Li, Murali Krishna Ramanathan, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
cs.AI
Abstract
Large language models trained on code have shown great potential to increase the productivity of software developers. Several execution-based benchmarks have been proposed to evaluate the functional correctness of model-generated code on simple programming problems. However, performing the same evaluation on complex real-world projects is expensive given the cost of execution. In contrast, static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions by leveraging Abstract Syntax Trees. Compared with execution-based evaluation, our method is not only more efficient but also applicable to code in the wild. For experiments, we collect code context from open-source repositories and generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are among the most common errors made by language models. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions.
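
To illustrate the kind of AST-based, linter-driven check the abstract describes, the sketch below parses a model-generated function body with Python's built-in ast module and, if it is syntactically valid, runs it through the pyflakes linter to surface issues such as undefined names and unused variables. This is a minimal sketch, not the authors' pipeline: the helper name check_completion, the example snippet, and the choice of pyflakes as the linter are assumptions for illustration only.

# Minimal sketch (not the paper's implementation): flag syntax errors with
# the stdlib `ast` module, then collect linter warnings (e.g., undefined
# names, unused variables) via pyflakes. `check_completion` is hypothetical.
import ast
import io

from pyflakes.api import check
from pyflakes.reporter import Reporter


def check_completion(source: str) -> dict:
    """Return a summary of static issues found in a generated snippet."""
    # 1) Syntax check: ast.parse raises SyntaxError on malformed code.
    try:
        ast.parse(source)
    except SyntaxError as e:
        return {"syntax_error": str(e), "lint_warnings": []}

    # 2) Lint check: pyflakes reports messages such as
    #    "undefined name 'x'" or "local variable 'y' is assigned to but never used".
    warnings, errors = io.StringIO(), io.StringIO()
    check(source, "<completion>", Reporter(warnings, errors))
    lint_warnings = [line for line in warnings.getvalue().splitlines() if line]
    return {"syntax_error": None, "lint_warnings": lint_warnings}


if __name__ == "__main__":
    # Hypothetical model completion containing an unused variable
    # and an undefined name.
    generated = (
        "def add_totals(items):\n"
        "    unused = 0\n"
        "    return sum(items) + tax_rate\n"
    )
    print(check_completion(generated))

Running such a check per generated function body requires no test cases or execution environment, which is what makes this style of evaluation cheap enough to apply to completions drawn from real-world repositories.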