Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization
January 19, 2026
Authors: Alessandro Midolo, Alessandro Giagnorio, Fiorella Zampetti, Rosalia Tufano, Gabriele Bavota, Massimiliano Di Penta
cs.AI
Abstract
Large Language Models (LLMs) are now extensively used for various software engineering tasks, code generation above all. Previous research has shown that suitable prompt engineering can help developers improve their code generation prompts. However, there are as yet no specific guidelines to steer developers toward writing effective prompts for code generation. In this work, we derive and evaluate development-specific prompt optimization guidelines. First, we use an iterative, test-driven approach to automatically refine code generation prompts, and we analyze the outcome of this process to identify the prompt improvement items that lead to test passes. From these items we elicit 10 guidelines for prompt improvement, related to better specifying I/O, stating pre- and post-conditions, providing examples, adding various types of detail, and clarifying ambiguities. We then conduct an assessment with 50 practitioners, who report their usage of the elicited prompt improvement patterns as well as their perceived usefulness; the latter does not always match their actual usage before learning of our guidelines. Our results carry implications not only for practitioners and educators, but also for those building better LLM-aided software development tools.
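To make the abstract's method concrete, the sketch below shows one plausible shape of an iterative, test-driven prompt refinement loop. It is an illustration only, not the authors' implementation: the callables `generate` (LLM produces code from a prompt) and `refine` (LLM rewrites the prompt given failing tests), along with all other names, are hypothetical placeholders.

```python
from typing import Callable, List, Optional, Tuple

def run_tests(code: str, tests: List[str]) -> List[str]:
    """Execute each assertion-style test snippet against the generated
    code and return the snippets that fail (i.e., raise an exception)."""
    failures = []
    for test in tests:
        namespace: dict = {}
        try:
            exec(code, namespace)   # define the generated functions
            exec(test, namespace)   # run one test against them
        except Exception:
            failures.append(test)
    return failures

def refine_until_pass(generate: Callable[[str], str],
                      refine: Callable[[str, str, List[str]], str],
                      prompt: str,
                      tests: List[str],
                      budget: int = 5) -> Tuple[str, Optional[str]]:
    """Iteratively refine a code generation prompt until the generated
    code passes all tests or the iteration budget is exhausted."""
    for _ in range(budget):
        code = generate(prompt)
        failing = run_tests(code, tests)
        if not failing:
            return prompt, code     # success: refined prompt + passing code
        # Feed the failing tests back so the LLM can improve the prompt
        # itself, e.g., by clarifying I/O or pre-/post-conditions.
        prompt = refine(prompt, code, failing)
    return prompt, None             # budget exhausted without a pass
```

In such a setup, the prompt edits made by `refine` across iterations that end in a test pass are the raw material from which improvement items, and ultimately guidelines, could be distilled.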