

sDPO: Don't Use Your Data All at Once

March 28, 2024
Authors: Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, Chanjun Park
cs.AI

Abstract

As the development of large language models (LLMs) progresses, aligning them with human preferences has become increasingly important. We propose stepwise DPO (sDPO), an extension of the recently popularized direct preference optimization (DPO) for alignment tuning. This approach involves dividing the available preference datasets and utilizing them in a stepwise manner, rather than employing them all at once. We demonstrate that this method facilitates the use of more precisely aligned reference models within the DPO training framework. Furthermore, sDPO trains the final model to be more performant, even outperforming other popular LLMs with more parameters.
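To make the stepwise idea concrete, the sketch below illustrates it in Python/PyTorch. It is only an illustrative sketch under stated assumptions, not the authors' implementation: the helper names (sdpo, train_one_chunk, preference_chunks) are hypothetical placeholders, and the loss shown is the standard DPO objective. The distinguishing sDPO detail is that the model aligned on one chunk of preference data is frozen and reused as the reference model for the next chunk.

# Hypothetical sketch of stepwise DPO (sDPO); not the authors' released code.
import copy
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO objective: push the policy to prefer chosen over rejected
    # responses, measured relative to a frozen reference model.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()

def sdpo(initial_model, preference_chunks, train_one_chunk):
    # Align the model one preference-data chunk at a time. After each step,
    # the freshly aligned model is frozen and becomes the reference model for
    # the next step, so later steps compare against a reference that is
    # already better aligned than the SFT base used by vanilla DPO.
    model = initial_model
    for chunk in preference_chunks:        # D_1, D_2, ..., D_T
        ref_model = copy.deepcopy(model)   # snapshot the current model
        ref_model.eval()
        for p in ref_model.parameters():
            p.requires_grad_(False)
        # train_one_chunk is a hypothetical helper that runs ordinary DPO
        # training on this chunk using the given frozen reference model.
        model = train_one_chunk(model, ref_model, chunk, dpo_loss)
    return model

In this sketch, the reference model is frozen within each step, so its log-probabilities on that step's chunk can be computed once up front, making each step cost roughly the same as a single DPO run on that chunk.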