SPAR: Personalized Content-Based Recommendation via Long Engagement Attention
February 16, 2024
Authors: Chiyu Zhang, Yifei Sun, Jun Chen, Jie Lei, Muhammad Abdul-Mageed, Sinong Wang, Rong Jin, Sem Park, Ning Yao, Bo Long
cs.AI
Abstract
Leveraging users' long engagement histories is essential for personalized
content recommendations. The success of pretrained language models (PLMs) in
NLP has led to their use in encoding user histories and candidate items,
framing content recommendations as textual semantic matching tasks. However,
existing works still struggle with processing very long user history texts
and with insufficient user-item interactions. In this paper, we introduce a
content-based recommendation framework, SPAR, which effectively tackles the
challenges of holistic user interest extraction from the long user engagement
history. It achieves this by leveraging a PLM, poly-attention layers, and an
attention sparsity mechanism to encode a user's history in a session-based
manner. The user-side and item-side features are sufficiently fused for
engagement prediction while standalone representations are maintained for both
sides, which is efficient for practical model deployment. Moreover, we enhance
user profiling by exploiting a large language model (LLM) to extract global
interests from users' engagement histories. Extensive experiments on two
benchmark datasets demonstrate
that our framework outperforms existing state-of-the-art (SoTA) methods.
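The abstract describes two encoding ideas that a short sketch can make concrete: encoding the engagement history session by session with a PLM, then pooling the session embeddings into a fixed set of interest codes with a poly-attention layer. Below is a minimal PyTorch sketch of these two pieces, assuming a HuggingFace-style PLM encoder; the class names (`PolyAttention`, `UserHistoryEncoder`), the number of codes, and the use of the per-session [CLS] embedding are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of session-based history encoding with poly-attention pooling.
# Assumes a HuggingFace-style encoder (returns .last_hidden_state); all names
# and hyperparameters here are illustrative, not the paper's exact design.

import torch
import torch.nn as nn


class PolyAttention(nn.Module):
    """Compress a long sequence of session embeddings into K interest codes."""

    def __init__(self, hidden: int, num_codes: int):
        super().__init__()
        # Learnable query codes, one per global interest slot.
        self.codes = nn.Parameter(torch.randn(num_codes, hidden) * 0.02)

    def forward(self, session_emb: torch.Tensor) -> torch.Tensor:
        # session_emb: (batch, num_sessions, hidden)
        attn = torch.einsum("kh,bsh->bks", self.codes, session_emb)
        attn = attn.softmax(dim=-1)  # each code attends over all sessions
        # Weighted sum of session embeddings per code: (batch, K, hidden)
        return torch.einsum("bks,bsh->bkh", attn, session_emb)


class UserHistoryEncoder(nn.Module):
    """Encode the history session by session, then pool with poly-attention."""

    def __init__(self, plm: nn.Module, hidden: int, num_codes: int = 16):
        super().__init__()
        self.plm = plm  # pretrained language model encoder
        self.poly = PolyAttention(hidden, num_codes)

    def forward(self, session_ids: torch.Tensor, session_mask: torch.Tensor):
        # session_ids: (batch, num_sessions, seq_len) token ids per session.
        b, s, t = session_ids.shape
        out = self.plm(
            input_ids=session_ids.view(b * s, t),
            attention_mask=session_mask.view(b * s, t),
        ).last_hidden_state[:, 0]         # [CLS] embedding per session
        session_emb = out.view(b, s, -1)  # (batch, num_sessions, hidden)
        return self.poly(session_emb)     # standalone user representation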