
Self-Alignment with Instruction Backtranslation

August 11, 2023
作者: Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis
cs.AI

Abstract

We present a scalable method to build a high-quality instruction-following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high-quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard that do not rely on distillation data, demonstrating highly effective self-alignment.
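The self-augmentation and self-curation loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_instruction` and `score_example` are hypothetical stand-ins for prompting the seed model, which in the actual method generates a candidate instruction for each web document and then rates the resulting (instruction, output) pair for quality.

```python
def generate_instruction(document: str) -> str:
    """Self-augmentation: predict an instruction the document could answer.
    Stub for a call to the seed model (hypothetical)."""
    return f"Write a passage about: {document.split()[0]}"

def score_example(instruction: str, document: str) -> float:
    """Self-curation: rate candidate quality, e.g. on a 1-5 scale.
    Stub heuristic standing in for the seed model's self-rating (hypothetical)."""
    return 5.0 if len(document.split()) >= 5 else 2.0

def backtranslate(corpus, threshold=4.0):
    """One iteration: augment every web document, keep only high-scoring pairs."""
    candidates = [(generate_instruction(doc), doc) for doc in corpus]
    return [(ins, doc) for ins, doc in candidates if score_example(ins, doc) >= threshold]

corpus = [
    "Gradient descent minimizes a loss by stepping against its gradient.",
    "Too short.",
]
curated = backtranslate(corpus)
# The curated pairs would then finetune a stronger model, and the loop
# repeats with the improved model (two iterations in the paper).
```

The key design point is that both steps reuse the same seed model, so the pipeline needs no external annotators or distillation from a stronger teacher.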