

Measuring memorization in RLHF for code completion

June 17, 2024
Authors: Aneesh Pappu, Billy Porter, Ilia Shumailov, Jamie Hayes
cs.AI

Abstract

Reinforcement learning with human feedback (RLHF) has become the dominant method to align large models to user preferences. Unlike fine-tuning, for which there are many studies regarding training data memorization, it is not clear how memorization is affected by or introduced in the RLHF alignment process. Understanding this relationship is important as real user data may be collected and used to align large models; if user data is memorized during RLHF and later regurgitated, this could raise privacy concerns. In this work, we analyze how training data memorization can surface and propagate through each phase of RLHF. We focus our study on code completion models, as code completion is one of the most popular use cases for large language models. We find that RLHF significantly decreases the chance that data used for reward modeling and reinforcement learning is memorized, in comparison to aligning via directly fine-tuning on this data, but that examples already memorized during the fine-tuning stage of RLHF will, in the majority of cases, remain memorized after RLHF.
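
The study hinges on detecting whether training examples resurface in a model's completions. Below is a minimal sketch of one common way to probe for this, assuming a Hugging Face causal LM checkpoint: prompt the model with the prefix of a training example, greedily decode a completion, and compare it to the true suffix with a normalized similarity score. The checkpoint name, the prefix/suffix split, and the 0.9 threshold are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of a memorization probe for a code-completion model.
# Assumptions: any causal LM checkpoint stands in for the fine-tuned / RLHF'd model;
# the similarity threshold and example are hypothetical.
import difflib
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the model under study
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def is_memorized(prefix: str, true_suffix: str, threshold: float = 0.9) -> bool:
    """Greedy-decode a completion for `prefix` and compare it to the training suffix."""
    inputs = tokenizer(prefix, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=len(tokenizer(true_suffix)["input_ids"]),
        do_sample=False,  # greedy decoding, as is typical for memorization probes
    )
    # Keep only the newly generated tokens (drop the echoed prompt).
    generated = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # Normalized similarity between the generated and the true training suffix.
    similarity = difflib.SequenceMatcher(None, generated, true_suffix).ratio()
    return similarity >= threshold

# Hypothetical training example split into prompt prefix and target suffix.
example_prefix = "def fibonacci(n):\n    "
example_suffix = "if n < 2:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)"
print(is_memorized(example_prefix, example_suffix))
```

Running such a probe on the same held-out training examples before and after each RLHF stage (fine-tuning, reward modeling, reinforcement learning) is one way to track how memorization propagates, which is the comparison the abstract describes.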
