Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning

June 30, 2024
Authors: Zimu Lu, Aojun Zhou, Ke Wang, Houxing Ren, Weikang Shi, Junting Pan, Mingjie Zhan
cs.AI

Abstract

Direct Preference Optimization (DPO) has proven effective at improving the performance of large language models (LLMs) on downstream tasks such as reasoning and alignment. In this work, we propose Step-Controlled DPO (SCDPO), a method for automatically providing stepwise error supervision by creating negative samples of mathematical reasoning rationales that start making errors at a specified step. By applying these samples in DPO training, SCDPO can better align the model to understand reasoning errors and output accurate reasoning steps. We apply SCDPO to both code-integrated and chain-of-thought solutions, empirically showing that it consistently improves performance compared to naive DPO on three different SFT models, including one existing SFT model and two models we finetuned. Qualitative analysis of the credit assignment of SCDPO and DPO demonstrates the effectiveness of SCDPO at identifying errors in mathematical solutions. We then apply SCDPO to an InternLM2-20B model, resulting in a 20B model that achieves high scores of 88.5% on GSM8K and 58.1% on MATH, rivaling all other open-source LLMs and demonstrating the great potential of our method.
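
To make the idea more concrete, the sketch below illustrates the two ingredients the abstract describes: the standard pairwise DPO loss and a generator that produces a rejected rationale whose first error appears at a chosen step. It is a minimal illustration under assumed interfaces — `sample_fn`, `is_correct_fn`, the raised-temperature resampling strategy, and the `beta` value are placeholders, not the paper's actual implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Pairwise DPO loss for one (chosen, rejected) pair of sequence log-probabilities.

    logp_* come from the policy being trained; ref_logp_* from the frozen SFT reference.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(sigmoid(margin))

def make_step_controlled_negative(problem, correct_steps, error_step,
                                  sample_fn, is_correct_fn, max_tries=8):
    """Build a rejected rationale that follows the correct solution up to `error_step`
    and then goes wrong, providing stepwise error supervision for a DPO pair.

    `sample_fn(problem, prefix)` and `is_correct_fn(problem, steps)` are hypothetical
    callbacks: the former resamples a continuation (e.g. at a raised temperature),
    the latter checks the final answer against the ground truth.
    """
    prefix = correct_steps[:error_step]
    for _ in range(max_tries):
        candidate = prefix + sample_fn(problem, prefix)
        if not is_correct_fn(problem, candidate):
            # The first `error_step` steps are correct; the error starts afterwards.
            return candidate
    return None  # could not induce an error from this prefix
```

In SCDPO-style training, the fully correct rationale would serve as the chosen sample and such a step-controlled erroneous rationale as the rejected sample in the pairwise loss above.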
