

NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment

May 2, 2024
作者: Gerald Shen, Zhilin Wang, Olivier Delalleau, Jiaqi Zeng, Yi Dong, Daniel Egert, Shengyang Sun, Jimmy Zhang, Sahil Jain, Ali Taghibakhshi, Markel Sanz Ausin, Ashwath Aithal, Oleksii Kuchaiev
cs.AI

Abstract

Aligning Large Language Models (LLMs) with human values and preferences is essential for making them helpful and safe. However, building efficient tools to perform alignment can be challenging, especially for the largest and most competent LLMs which often contain tens or hundreds of billions of parameters. We create NeMo-Aligner, a toolkit for model alignment that can efficiently scale to using hundreds of GPUs for training. NeMo-Aligner comes with highly optimized and scalable implementations for major paradigms of model alignment such as: Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), SteerLM, and Self-Play Fine-Tuning (SPIN). Additionally, our toolkit supports running most of the alignment techniques in a Parameter Efficient Fine-Tuning (PEFT) setting. NeMo-Aligner is designed for extensibility, allowing support for other alignment techniques with minimal effort. It is open-sourced with Apache 2.0 License and we invite community contributions at https://github.com/NVIDIA/NeMo-Aligner
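To make one of the listed paradigms concrete, Direct Preference Optimization (DPO) trains the policy directly on preference pairs by pushing up the policy-to-reference log-ratio of the chosen response relative to the rejected one. The sketch below is a minimal, illustrative implementation of the standard DPO loss for a single pair, not NeMo-Aligner's actual code; the function name and inputs are assumptions for illustration.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are the summed log-probabilities of the chosen and rejected
    responses under the trainable policy and the frozen reference model.
    beta controls how strongly the policy may deviate from the reference.
    """
    # Log-ratio of policy to reference for each response
    chosen_logratio = policy_logp_chosen - ref_logp_chosen
    rejected_logratio = policy_logp_rejected - ref_logp_rejected
    # -log sigmoid(beta * margin): shrinks as the policy favors the
    # chosen response more than the reference model does
    margin = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Example: the policy mildly prefers the chosen response
loss = dpo_loss(-10.0, -12.0, -11.0, -11.5)
```

In a real training loop these per-pair losses would be averaged over a batch and backpropagated through the policy's log-probabilities only, with the reference model kept frozen.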
