ChatCoder: Chat-based Refine Requirement Improves LLMs' Code Generation
November 1, 2023
Authors: Zejun Wang, Jia Li, Ge Li, Zhi Jin
cs.AI
Abstract
Large language models have shown strong performance in generating code to meet
human requirements. However, human requirements expressed in natural language
can be vague, incomplete, and ambiguous, leading large language models to
misunderstand them and make mistakes. Worse, it is difficult for human users to
refine their requirements on their own. To help human users refine their
requirements and improve large language models' code generation performance,
we propose ChatCoder: a method for refining requirements by chatting with
large language models. We design a chat scheme in which the large language
model guides human users to refine their expression of requirements so that it
becomes more precise, unambiguous, and complete than before. Experiments show
that ChatCoder improves existing large language models' code generation
performance by a large margin. Moreover, ChatCoder outperforms refine-based
methods and LLMs fine-tuned on human responses.
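The abstract only outlines the chat scheme, so the following is a minimal, runnable sketch of what such a question-driven refinement loop might look like. All function names, the round structure, and the toy heuristic standing in for the LLM are assumptions for illustration; the paper's actual prompts and chat rounds may differ.

```python
# Hypothetical sketch of a chat-based requirement-refinement loop in the
# spirit of ChatCoder: the model asks clarifying questions, the user answers,
# and the answers are merged back into a more precise requirement.

def ask_clarifying_questions(requirement):
    """Stand-in for an LLM call that spots vague points in a requirement.

    A real system would prompt a large language model to list ambiguities;
    here a trivial keyword heuristic keeps the sketch runnable offline.
    """
    questions = []
    if "sort" in requirement and "order" not in requirement.lower():
        questions.append("Should the result be ascending or descending?")
    if "list" in requirement and "empty" not in requirement.lower():
        questions.append("What should happen for an empty list?")
    return questions

def refine_requirement(requirement, answers):
    """Merge the user's answers back into a sharper requirement text."""
    return requirement + " Constraints: " + " ".join(answers)

def chat_refine(requirement, answer_fn, max_rounds=2):
    """Alternate question/answer rounds until no ambiguity is detected."""
    for _ in range(max_rounds):
        questions = ask_clarifying_questions(requirement)
        if not questions:
            break  # requirement judged precise enough for code generation
        answers = [answer_fn(q) for q in questions]
        requirement = refine_requirement(requirement, answers)
    return requirement

# Example: a vague spec gets sharpened before being sent to a code model.
spec = "Write a function that sorts a list of numbers."
user_answers = {
    "Should the result be ascending or descending?": "Ascending order.",
    "What should happen for an empty list?": "Return an empty list.",
}
refined = chat_refine(spec, lambda q: user_answers[q])
print(refined)
```

In this toy run the refined requirement now pins down the sort order and the empty-input behavior, which is the kind of added precision the abstract credits for the improved code generation.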