Self-Alignment with Instruction Backtranslation

August 11, 2023
Authors: Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis
cs.AI

Abstract

We present a scalable method to build a high-quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard while not relying on distillation data, demonstrating highly effective self-alignment.
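The abstract describes a two-step, iterative procedure. The sketch below illustrates that loop in plain Python under simplifying assumptions; the helpers `generate_instruction`, `quality_score`, and `finetune` are hypothetical stubs standing in for LLM calls, not the authors' implementation.

```python
# Minimal sketch of the instruction backtranslation loop summarized in the
# abstract. All model calls are hypothetical stand-ins for an LLM.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Example:
    instruction: str
    response: str  # a human-written web document serves as the response

def generate_instruction(model, document: str) -> str:
    """Self-augmentation: the seed model predicts an instruction for which
    `document` would be a good answer. Stubbed here for illustration."""
    return f"Write a detailed passage that covers: {document[:40]}..."

def quality_score(model, example: Example) -> float:
    """Self-curation: the same model rates each candidate pair so that only
    the highest-rated pairs are kept. Stubbed with a crude length heuristic."""
    return min(len(example.response) / 100.0, 5.0)

def finetune(model, data: List[Example]):
    """Placeholder for supervised finetuning on (instruction, response) pairs."""
    return model  # a real run would return an updated, stronger model

def backtranslation_iteration(model, web_corpus: List[str],
                              threshold: float = 4.5) -> Tuple[object, List[Example]]:
    # 1. Self-augmentation: label unlabeled web documents with instructions.
    candidates = [Example(generate_instruction(model, doc), doc) for doc in web_corpus]
    # 2. Self-curation: keep only high-quality candidate pairs.
    curated = [ex for ex in candidates if quality_score(model, ex) >= threshold]
    # 3. Finetune on the curated data to obtain a stronger model.
    return finetune(model, curated), curated

seed_model = object()  # stands in for LLaMa finetuned on a small seed set
corpus = ["A human-written web document about gardening. " * 20]
model, data = backtranslation_iteration(seed_model, corpus)
model, data = backtranslation_iteration(model, corpus)  # two iterations, as in the paper
```

The key design point the paper emphasizes is that the same model both labels and filters its own training data, so data quality can improve across iterations without distillation from a stronger external model.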