

Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences

July 26, 2023
Authors: Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, Lucas Dixon
cs.AI

Abstract

Traditional recommender systems leverage users' item preference history to recommend novel content that users may like. However, modern dialog interfaces that allow users to express language-based preferences offer a fundamentally different modality for preference input. Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations from both item-based and language-based preferences in comparison to state-of-the-art item-based collaborative filtering (CF) methods. To support this investigation, we collect a new dataset consisting of both item-based and language-based preferences elicited from users along with their ratings on a variety of (biased) recommended items and (unbiased) random items. Among numerous experimental results, we find that LLMs provide competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based CF methods, despite having no supervised training for this specific task (zero-shot) or only a few labels (few-shot). This is particularly promising as language-based preference representations are more explainable and scrutable than item-based or vector-based representations.
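The core idea described above can be illustrated with a minimal sketch of how a zero-shot prompt might be assembled from a purely language-based preference (no item history). The function name, prompt wording, and example items below are illustrative assumptions, not the paper's actual template or evaluation corpus:

```python
# Hypothetical sketch: building a zero-shot recommendation prompt from a
# natural-language preference description. The template and item names are
# illustrative assumptions, not taken from the paper.

def build_recommendation_prompt(preference_text, candidate_items, k=3):
    """Ask an LLM to rank candidate items given only a language-based
    preference, with no supervised training for the task (zero-shot)."""
    item_list = "\n".join(f"- {item}" for item in candidate_items)
    return (
        f"A user describes their taste as follows:\n"
        f'"{preference_text}"\n\n'
        f"From the candidate items below, recommend the {k} items the user "
        f"is most likely to enjoy, best match first:\n{item_list}"
    )

prompt = build_recommendation_prompt(
    "I love slow-burn sci-fi with strong world-building, but dislike horror.",
    ["Dune", "Hereditary", "The Expanse", "Annihilation"],
)
print(prompt)
```

A few-shot variant would prepend a small number of labeled preference-to-recommendation examples to the same prompt; in either case the LLM receives no item-interaction history, which is what makes the comparison to item-based CF in the near cold-start setting meaningful.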