REFINE-AF: A Task-Agnostic Framework to Align Language Models via Self-Generated Instructions using Reinforcement Learning from Automated Feedback
May 10, 2025
Authors: Aniruddha Roy, Pretam Ray, Abhilash Nandy, Somak Aditya, Pawan Goyal
cs.AI
Abstract
Instruction-based Large Language Models (LLMs) have proven effective in numerous few-shot and zero-shot Natural Language Processing (NLP) tasks. However, creating human-annotated instruction data is time-consuming, expensive, and often limited in quantity and task diversity. Previous research has attempted to address this challenge by proposing frameworks that generate instructions in a semi-automated, task-agnostic manner directly from the model itself. Many of these efforts have relied on large, API-only models such as GPT-3.5 (175B), which are expensive and subject to limits on the number of queries. This paper explores the performance of three small open-source LLMs, namely LLaMA 2-7B, LLaMA 2-13B, and Mistral 7B, within a semi-automated framework, thereby reducing the human intervention, effort, and cost required to generate an instruction dataset for fine-tuning LLMs. Furthermore, we demonstrate that incorporating a Reinforcement Learning (RL) based training algorithm into this LLM-based framework leads to further improvements. Our evaluation of the dataset reveals that these RL-based frameworks achieve substantial improvements in 63-66% of the tasks compared to previous approaches.