
Multi-Task End-to-End Training Improves Conversational Recommendation

May 8, 2023
作者: Naveen Ram, Dima Kuzmin, Ellie Ka In Chio, Moustafa Farid Alzantot, Santiago Ontanon, Ambarish Jash, Judith Yue Li
cs.AI

Abstract

In this paper, we analyze the performance of a multitask end-to-end transformer model on the task of conversational recommendation, which aims to provide recommendations based on a user's explicit preferences expressed in dialogue. While previous works in this area adopt complex multi-component approaches where the dialogue management and entity recommendation tasks are handled by separate components, we show that a unified transformer model, based on the T5 text-to-text transformer model, can perform competitively in both recommending relevant items and generating conversation dialogue. We fine-tune our model on the ReDIAL conversational movie recommendation dataset, and create additional training tasks derived from MovieLens (such as the prediction of movie attributes and related movies based on an input movie), in a multitask learning setting. Using a series of probe studies, we demonstrate that the learned knowledge in the additional tasks is transferred to the conversational setting, where each task leads to a 9%-52% increase in its related probe score.
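Since a T5-style model consumes everything as text-to-text pairs, the MovieLens-derived auxiliary tasks described above (attribute prediction, related-movie prediction) must be serialized into input/target strings and mixed with the conversational data. The following is a minimal illustrative sketch of that serialization step; the prompt templates, field names, and example data are assumptions for illustration, not the paper's actual formats.

```python
# Hypothetical sketch: serializing MovieLens-derived auxiliary tasks as
# text-to-text (input, target) pairs for multitask T5-style fine-tuning.
# Prompt templates and field names below are illustrative assumptions.

def make_attribute_example(movie, attributes):
    """Attribute-prediction task: movie title in, attribute list out."""
    return (f"predict attributes: {movie}", ", ".join(attributes))

def make_related_example(movie, related):
    """Related-movie task: movie title in, related titles out."""
    return (f"predict related movies: {movie}", ", ".join(related))

def build_multitask_examples(movielens):
    """Mix both auxiliary tasks into one flat list of training pairs,
    which could then be interleaved with ReDIAL dialogue examples."""
    examples = []
    for movie, info in movielens.items():
        examples.append(make_attribute_example(movie, info["genres"]))
        examples.append(make_related_example(movie, info["related"]))
    return examples

# Toy illustrative data (not real MovieLens records):
movielens = {
    "The Matrix (1999)": {
        "genres": ["Action", "Sci-Fi"],
        "related": ["The Matrix Reloaded (2003)"],
    },
}

for src, tgt in build_multitask_examples(movielens):
    print(src, "->", tgt)
```

In this kind of setup, all tasks share one model and one loss (standard sequence-to-sequence cross-entropy), so knowledge learned from the auxiliary pairs can transfer to the conversational recommendation task, which is what the probe studies measure.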