
CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues

April 4, 2024
Authors: Makesh Narsimhan Sreedhar, Traian Rebedea, Shaona Ghosh, Christopher Parisien
cs.AI

Abstract

Recent advancements in instruction-tuning datasets have predominantly focused on specific tasks like mathematical or logical reasoning. There has been a notable gap in data designed for aligning language models to maintain topic relevance in conversations, a critical aspect for deploying chatbots to production. We introduce the CantTalkAboutThis dataset to help language models remain focused on the subject at hand during task-oriented interactions. It consists of synthetic dialogues on a wide range of conversation topics from different domains. These dialogues are interspersed with distractor turns that intentionally divert the chatbot from the predefined topic. Fine-tuning language models on this dataset makes them more resilient to deviating from their assigned role and improves their ability to maintain topical coherence compared to general-purpose instruction-tuned LLMs like GPT-4-turbo and Mixtral-Instruct. Additionally, preliminary observations suggest that training models on this dataset also enhances their performance on fine-grained instruction-following tasks.
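To make the dataset description concrete, the sketch below shows one plausible way a dialogue with interleaved distractor turns could be represented. This is an illustrative assumption only: the field names, record shape, and example dialogue are not taken from the released dataset.

```python
# Hypothetical sketch of a CantTalkAboutThis-style dialogue record
# (assumed schema, not the authors' released format).
from dataclasses import dataclass, field


@dataclass
class Turn:
    role: str                    # "user" or "assistant"
    text: str
    is_distractor: bool = False  # user turn that tries to pull the bot off topic


@dataclass
class Dialogue:
    domain: str                  # e.g. "travel", "banking"
    topic: str                   # the scenario the assistant must stay on
    turns: list[Turn] = field(default_factory=list)


# Example record: the third turn is a distractor, and the assistant
# declines it and steers the conversation back to the assigned topic.
example = Dialogue(
    domain="travel",
    topic="Help the user book a flight to Paris.",
    turns=[
        Turn("user", "I need a flight to Paris next Friday."),
        Turn("assistant", "Sure, do you prefer a morning or evening departure?"),
        Turn("user", "By the way, what do you think about the election?",
             is_distractor=True),
        Turn("assistant", "I can only help with your flight booking. "
                          "Shall we continue with the departure time?"),
    ],
)
```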
