
PERL: Parameter Efficient Reinforcement Learning from Human Feedback

March 15, 2024
作者: Hakim Sidahmed, Samrat Phatale, Alex Hutcheson, Zhuonan Lin, Zhang Chen, Zac Yu, Jarvis Jin, Roman Komarytsia, Christiane Ahlheim, Yonghao Zhu, Simral Chaudhary, Bowen Li, Saravanan Ganesh, Bill Byrne, Jessica Hoffmann, Hassan Mansoor, Wei Li, Abhinav Rastogi, Lucas Dixon
cs.AI

Abstract

Reinforcement Learning from Human Feedback (RLHF) has proven to be a strong method to align pretrained Large Language Models (LLMs) with human preferences. But training models with RLHF is computationally expensive and a complex process overall. In this work, we study RLHF where the underlying models are trained using Low-Rank Adaptation (LoRA), the parameter-efficient method introduced by Hu et al. [2021]. We investigate the setup of "Parameter Efficient Reinforcement Learning" (PERL), in which we perform both reward model training and reinforcement learning using LoRA. We compare PERL to conventional fine-tuning (full-tuning) across various configurations on 7 reward modeling and reinforcement learning benchmarks, including 2 novel datasets. We find that PERL performs on par with the conventional RLHF setting while training faster and using less memory. This preserves the high performance of RLHF while reducing the computational burden that limits its adoption as an alignment technique for Large Language Models. We also release 2 novel thumbs up/down preference datasets, "Taskmaster Coffee" and "Taskmaster Ticketing", to promote research around RLHF.
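For context, the following is a minimal sketch of the LoRA reparameterization from Hu et al. [2021] that PERL applies to both the reward model and the policy; it is not the paper's actual code, and the class name, rank, and alpha defaults are illustrative. A frozen pretrained weight W0 is augmented with a trainable low-rank update B A (with rank r much smaller than the weight dimensions), so only r * (d_in + d_out) parameters are trained instead of d_in * d_out.

```python
# Minimal LoRA sketch (assumed names; not the PERL implementation).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight W0: frozen during reward modeling and RL.
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)
        # Low-rank adapters A and B: the only trainable parameters.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W0 + scaling * (B @ A); computed without
        # materializing the full update matrix.
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())


layer = LoRALinear(d_in=4096, d_out=4096, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.4%}")  # ~0.39% at rank 8
```

Because gradients and optimizer state are kept only for A and B, this kind of setup is what lets PERL train faster and with less memory than full-tuning while matching its performance.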
