Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations
March 8, 2024
Authors: Swapnaja Achintalwar, Ioana Baldini, Djallel Bouneffouf, Joan Byamugisha, Maria Chang, Pierre Dognin, Eitan Farchi, Ndivhuwo Makondo, Aleksandra Mojsilovic, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Inkit Padhi, Orna Raz, Jesus Rios, Prasanna Sattigeri, Moninder Singh, Siphiwe Thwala, Rosario A. Uceda-Sosa, Kush R. Varshney
cs.AI
Abstract
The alignment of large language models is usually done by model providers to
add or control behaviors that are common or universally understood across use
cases and contexts. In contrast, in this article, we present an approach and
architecture that empowers application developers to tune a model to their
particular values, social norms, laws and other regulations, and orchestrate
between potentially conflicting requirements in context. We lay out three main
components of such an Alignment Studio architecture: Framers, Instructors, and
Auditors that work in concert to control the behavior of a language model. We
illustrate this approach with a running example of aligning a company's
internal-facing enterprise chatbot to its business conduct guidelines.
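
To make the division of labor among the three components concrete, here is a minimal sketch of how Framers, Instructors, and Auditors could compose into a single pipeline. All class names, method names, and data shapes below are illustrative assumptions for this sketch; the abstract does not specify the paper's actual interfaces or implementation.

# Hypothetical sketch of the three-component Alignment Studio pipeline
# described in the abstract. Names and interfaces are assumptions, not
# the paper's actual API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Regulation:
    """A contextual rule, e.g. one clause of business conduct guidelines."""
    rule_id: str
    text: str


class Framers:
    """Turn policy documents into concrete instruction-style artifacts."""

    def frame(self, regulations: List[Regulation]) -> List[dict]:
        # Stub: derive one instruction example per rule.
        return [{"rule_id": r.rule_id, "instruction": r.text} for r in regulations]


class Instructors:
    """Steer or tune the model toward the framed requirements."""

    def instruct(self, model: Callable[[str], str],
                 framed: List[dict]) -> Callable[[str], str]:
        # Stub: prepend the framed rules to every prompt. A real system
        # might fine-tune the model or apply guardrails instead.
        preamble = "\n".join(d["instruction"] for d in framed)
        return lambda prompt: model(f"{preamble}\n\n{prompt}")


class Auditors:
    """Probe the tuned model and collect outputs for compliance review."""

    def audit(self, model: Callable[[str], str], probes: List[str]) -> List[str]:
        return [model(p) for p in probes]


def alignment_studio(base_model: Callable[[str], str],
                     regulations: List[Regulation],
                     probes: List[str]):
    """Run Framers, Instructors, and Auditors in concert."""
    framed = Framers().frame(regulations)
    tuned = Instructors().instruct(base_model, framed)
    return tuned, Auditors().audit(tuned, probes)


# Example usage with a stub model that echoes part of its prompt.
tuned, findings = alignment_studio(
    base_model=lambda prompt: f"[model saw] {prompt[:60]}...",
    regulations=[Regulation("BCG-1", "Do not disclose confidential client data.")],
    probes=["Summarize our client list."],
)

The pipeline shape reflects the abstract's description: Framers produce the contextualized requirements, Instructors apply them to the model, and Auditors close the loop by checking the resulting behavior, so conflicting requirements can be surfaced and orchestrated before deployment.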