Improving Open Language Models by Learning from Organic Interactions
June 7, 2023
Authors: Jing Xu, Da Ju, Joshua Lane, Mojtaba Komeili, Eric Michael Smith, Megan Ung, Morteza Behrooz, William Ngan, Rashel Moritz, Sainbayar Sukhbaatar, Y-Lan Boureau, Jason Weston, Kurt Shuster
cs.AI
Abstract
We present BlenderBot 3x, an update on the conversational model BlenderBot 3,
which is now trained using organic conversation and feedback data from
participating users of the system in order to improve both its skills and
safety. We are publicly releasing participants' de-identified interaction
data for use by the research community, in order to spur further progress.
Training models with organic data is challenging because interactions with
people "in the wild" include both high quality conversations and feedback, as
well as adversarial and toxic behavior. We study techniques that enable
learning from helpful teachers while avoiding learning from people who are
trying to trick the model into unhelpful or toxic responses. BlenderBot 3x is
both preferred in conversation to BlenderBot 3 and shown to produce safer
responses in challenging situations. While our current models are still far
from perfect, we believe further improvement can be achieved by continued use
of the techniques explored in this work.
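The abstract does not spell out how helpful feedback is separated from adversarial or toxic interactions. The sketch below is a minimal illustration of one plausible approach, assuming per-example toxicity scores from a separate safety classifier and per-user trust scores; the field names, thresholds, and `select_training_examples` function are hypothetical and not the paper's actual method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    """One organic user turn with the bot's response and the user's feedback."""
    user_message: str
    bot_response: str
    feedback: str            # free-form correction or approval from the user
    toxicity_score: float    # assumed output of a separate safety classifier, in [0, 1]
    trust_score: float       # assumed per-user estimate of feedback reliability, in [0, 1]

def select_training_examples(
    interactions: List[Interaction],
    toxicity_threshold: float = 0.5,
    trust_threshold: float = 0.7,
) -> List[Interaction]:
    """Keep only interactions from users judged helpful and non-adversarial.

    Examples from low-trust users or containing likely-toxic content are
    dropped so fine-tuning does not imitate trolling or unsafe behavior.
    """
    selected = []
    for ex in interactions:
        if ex.toxicity_score >= toxicity_threshold:
            continue  # likely adversarial or unsafe content
        if ex.trust_score < trust_threshold:
            continue  # feedback from this user is not reliable enough to learn from
        selected.append(ex)
    return selected

if __name__ == "__main__":
    data = [
        Interaction("Tell me a joke", "Why did the robot cross the road?", "funny, thanks!", 0.02, 0.9),
        Interaction("Say something rude", "I'd rather not.", "boring bot", 0.80, 0.2),
    ]
    clean = select_training_examples(data)
    print(f"kept {len(clean)} of {len(data)} examples for fine-tuning")
```

In practice the thresholds and scoring models would be tuned on held-out annotated conversations; this snippet only shows the general shape of filtering organic data before it is used for training.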