

CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues

April 4, 2024
Authors: Makesh Narsimhan Sreedhar, Traian Rebedea, Shaona Ghosh, Christopher Parisien
cs.AI

Abstract

Recent advancements in instruction-tuning datasets have predominantly focused on specific tasks like mathematical or logical reasoning. There has been a notable gap in data designed for aligning language models to maintain topic relevance in conversations - a critical aspect for deploying chatbots to production. We introduce the CantTalkAboutThis dataset to help language models remain focused on the subject at hand during task-oriented interactions. It consists of synthetic dialogues on a wide range of conversation topics from different domains. These dialogues are interspersed with distractor turns that intentionally divert the chatbot from the predefined topic. Fine-tuning language models on this dataset helps make them resilient to deviating from the role assigned and improves their ability to maintain topical coherence compared to general-purpose instruction-tuned LLMs like GPT-4-turbo and Mixtral-Instruct. Additionally, preliminary observations suggest that training models on this dataset also enhances their performance on fine-grained instruction-following tasks.
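
The abstract describes dialogues built around a topic-constrained system prompt, with distractor turns inserted to pull the chatbot off topic. As a rough illustration only, the sketch below shows what one such training record could look like; the field names (`system_prompt`, `turns`, `is_distractor`) and the example content are hypothetical and not the dataset's actual schema.

```python
# Hypothetical illustration of a topic-following training record.
# Field names and dialogue content are assumptions; the paper's actual
# data format may differ.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Turn:
    role: str                     # "user" or "assistant"
    text: str
    is_distractor: bool = False   # user turn that tries to pull the bot off topic


@dataclass
class DialogueRecord:
    system_prompt: str            # defines the scenario and the only allowed topic
    turns: List[Turn] = field(default_factory=list)


record = DialogueRecord(
    system_prompt=(
        "You are a customer-support assistant for a travel-booking service. "
        "Only discuss flight reservations; politely decline other topics."
    ),
    turns=[
        Turn("user", "I need to change my flight to Denver next Tuesday."),
        Turn("assistant", "Sure, I can help with that. What is your booking reference?"),
        # Distractor turn: intentionally off topic.
        Turn("user", "By the way, can you give me stock tips for airline shares?",
             is_distractor=True),
        # Target behavior: decline the distractor and steer back to the task.
        Turn("assistant", "I can only help with flight reservations here. "
                          "Could you share your booking reference so we can continue?"),
    ],
)
```

Under this reading, fine-tuning would train the model to produce the on-topic assistant response that follows each distractor turn, rather than engaging with the off-topic request.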
