WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation
December 20, 2023
Authors: Zhaojian Yu, Xin Zhang, Ning Shang, Yangyu Huang, Can Xu, Yishujie Zhao, Wenxiang Hu, Qiufeng Yin
cs.AI
Abstract
Recent work demonstrates that, after being fine-tuned on a high-quality
instruction dataset, the resulting model can obtain impressive capabilities to
address a wide range of tasks. However, existing methods for instruction data
generation often produce duplicate data and are not controllable enough on data
quality. In this paper, we extend the generalization of instruction tuning by
classifying the instruction data into 4 code-related tasks and propose an
LLM-based Generator-Discriminator data processing framework to generate
diverse, high-quality instruction data from open-source code. Hence, we
introduce CodeOcean, a dataset comprising 20,000 instruction instances across
4 universal code-related tasks, which is aimed at augmenting the effectiveness
of instruction tuning and improving the generalization ability of fine-tuned
models. Subsequently, we present WaveCoder, a fine-tuned Code LLM with
Widespread And Versatile Enhanced instruction tuning, specifically designed to
enhance the instruction tuning of Code Large Language Models (Code LLMs). Our
experiments demonstrate that WaveCoder models outperform other open-source
models in terms of generalization ability across different code-related tasks
at the same fine-tuning scale. Moreover, WaveCoder exhibits high efficiency on
previous code generation tasks. This paper thus offers a significant
contribution to the fields of instruction data generation and model
fine-tuning, providing new insights and tools for enhancing performance on
code-related tasks.
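
The abstract describes the Generator-Discriminator framework only at a high
level. The sketch below illustrates how such a generate-then-filter loop might
look; it is not the authors' implementation, and the helper names
(`call_llm`, `generate_instruction`, `discriminate`), the output delimiter, and
the task list are assumptions made for illustration.

```python
# Minimal sketch of an LLM-based Generator-Discriminator data pipeline:
# a generator LLM drafts instruction-answer pairs from raw open-source code,
# and a discriminator LLM accepts or rejects each candidate.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to any instruction-following LLM."""
    raise NotImplementedError

def generate_instruction(code_snippet: str, task: str) -> dict:
    """Generator step: draft an instruction/answer pair for one code-related task."""
    prompt = (
        f"Task category: {task}\n"
        f"Source code:\n{code_snippet}\n"
        "Write an instruction and a reference answer grounded in this code, "
        "separated by a line containing only '---'."
    )
    raw = call_llm(prompt)
    instruction, _, answer = raw.partition("\n---\n")  # assumed delimiter
    return {"task": task, "instruction": instruction.strip(), "output": answer.strip()}

def discriminate(example: dict) -> bool:
    """Discriminator step: ask an LLM whether the candidate pair meets quality criteria."""
    prompt = (
        "Judge whether this instruction-answer pair is consistent, non-trivial, "
        f"and answerable from the instruction alone. Reply YES or NO.\n{example}"
    )
    return call_llm(prompt).strip().upper().startswith("YES")

# The four code-related task categories assumed for this sketch.
TASKS = ["code generation", "code summarization", "code translation", "code repair"]

def build_dataset(snippets: list[str]) -> list[dict]:
    """Run the generate-then-filter loop over raw code snippets."""
    dataset = []
    for i, snippet in enumerate(snippets):
        candidate = generate_instruction(snippet, TASKS[i % len(TASKS)])
        if discriminate(candidate):  # keep only candidates the discriminator accepts
            dataset.append(candidate)
    return dataset
```

In this reading, controllability over data quality comes from the discriminator
acting as an explicit filter, while diversity comes from spreading generation
across the different task categories and source snippets.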