Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

December 11, 2023
Authors: Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel
cs.AI

Abstract

Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar feedback, for example, on math problems where one can verify correctness. To do so, we investigate a simple self-training method based on expectation-maximization, which we call ReST^EM, where we (1) generate samples from the model and filter them using binary feedback, (2) fine-tune the model on these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS coding benchmarks using PaLM-2 models, we find that ReST^EM scales favorably with model size and significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with feedback can substantially reduce dependence on human-generated data.
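To make steps (1)-(3) concrete, here is a minimal Python sketch of the ReST^EM loop as described in the abstract. The sampler, binary reward check, and fine-tuning routine are caller-supplied callables; their names, signatures, and the default iteration/sample counts are illustrative assumptions, not interfaces from the paper's code.

```python
# Hypothetical sketch of the ReST^EM self-training loop (Singh et al., 2023).
# sample / reward / finetune are assumed interfaces supplied by the caller.

from typing import Callable, List, Tuple

def rest_em(
    base_model,                            # pretrained checkpoint (e.g., PaLM-2)
    problems: List[str],                   # e.g., MATH or APPS problem statements
    sample: Callable[[object, str, int], List[str]],  # (model, problem, n) -> candidate solutions
    reward: Callable[[str, str], bool],    # binary feedback: is this solution verified correct?
    finetune: Callable[[object, List[Tuple[str, str]]], object],  # SFT on (problem, solution) pairs
    iterations: int = 3,
    samples_per_problem: int = 32,
):
    """Expectation-maximization-style self-training with binary feedback."""
    model = base_model
    for _ in range(iterations):
        # E-step: sample solutions from the current model and keep only
        # those that pass the correctness check.
        kept = [
            (p, s)
            for p in problems
            for s in sample(model, p, samples_per_problem)
            if reward(p, s)
        ]
        # M-step: fine-tune on the filtered samples. The paper restarts
        # from the base model each iteration rather than continuing from
        # the latest checkpoint, which it reports helps limit drift.
        model = finetune(base_model, kept)
    return model
```

In this sketch, `reward` is where the scalar feedback enters: for MATH it could compare a solution's final answer against the reference answer, and for APPS it could execute the candidate program against the provided test cases, yielding the binary filter the abstract describes.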