Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations
March 8, 2024
Authors: Swapnaja Achintalwar, Ioana Baldini, Djallel Bouneffouf, Joan Byamugisha, Maria Chang, Pierre Dognin, Eitan Farchi, Ndivhuwo Makondo, Aleksandra Mojsilovic, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Inkit Padhi, Orna Raz, Jesus Rios, Prasanna Sattigeri, Moninder Singh, Siphiwe Thwala, Rosario A. Uceda-Sosa, Kush R. Varshney
cs.AI
Abstract
The alignment of large language models is usually done by model providers to
add or control behaviors that are common or universally understood across use
cases and contexts. In contrast, in this article, we present an approach and
architecture that empowers application developers to tune a model to their
particular values, social norms, laws and other regulations, and orchestrate
between potentially conflicting requirements in context. We lay out three main
components of such an Alignment Studio architecture: Framers, Instructors, and
Auditors that work in concert to control the behavior of a language model. We
illustrate this approach with a running example of aligning a company's
internal-facing enterprise chatbot to its business conduct guidelines.
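The three components named in the abstract can be pictured as a simple pipeline: Framers turn a regulation document into explicit rules, Instructors steer the model with those rules, and Auditors check the model's output against them. The sketch below is a minimal, hypothetical illustration of that flow; the class names follow the abstract, but every method, signature, and the toy rule-extraction and auditing logic are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the Alignment Studio flow: Framers, Instructors,
# and Auditors working in concert. All implementation details here are
# illustrative assumptions based only on the component names in the abstract.

from dataclasses import dataclass, field


@dataclass
class Framer:
    """Turns a policy document into explicit, checkable rules."""
    policy_text: str

    def frame(self) -> list[str]:
        # Toy rule extraction: treat each non-empty line as one rule.
        return [ln.strip() for ln in self.policy_text.splitlines() if ln.strip()]


@dataclass
class Instructor:
    """Steers the model by injecting the framed rules into the prompt."""
    rules: list[str]

    def build_prompt(self, user_query: str) -> str:
        preamble = "Follow these business conduct rules:\n" + "\n".join(
            f"- {r}" for r in self.rules
        )
        return f"{preamble}\n\nUser: {user_query}"


@dataclass
class Auditor:
    """Flags responses that violate a rule (here: a naive keyword check)."""
    banned_terms: list[str] = field(default_factory=list)

    def audit(self, response: str) -> bool:
        return not any(t.lower() in response.lower() for t in self.banned_terms)


# Orchestration: a real system would also resolve conflicts between
# requirements here; this sketch simply chains the three components.
policy = "Do not share client data.\nDisclose conflicts of interest."
rules = Framer(policy).frame()
prompt = Instructor(rules).build_prompt("Summarize our travel policy.")
compliant = Auditor(banned_terms=["client data"]).audit(
    "Our travel policy reimburses economy fares."
)
```

In this toy orchestration, a non-compliant response would make `compliant` false, at which point a production system might regenerate or filter the answer.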