SPAR: Personalized Content-Based Recommendation via Long Engagement Attention
February 16, 2024
Authors: Chiyu Zhang, Yifei Sun, Jun Chen, Jie Lei, Muhammad Abdul-Mageed, Sinong Wang, Rong Jin, Sem Park, Ning Yao, Bo Long
cs.AI
Abstract
Leveraging users' long engagement histories is essential for personalized
content recommendations. The success of pretrained language models (PLMs) in
NLP has led to their use in encoding user histories and candidate items,
framing content recommendations as textual semantic matching tasks. However,
existing works still struggle with processing very long user history texts
and with insufficient user-item interaction. In this paper, we introduce a
content-based recommendation framework, SPAR, which effectively tackles the
challenge of extracting holistic user interests from long user engagement
histories. It achieves this by leveraging a PLM, poly-attention layers, and an
attention sparsity mechanism to encode a user's history in a session-based
manner. User-side and item-side features are sufficiently fused for engagement
prediction while maintaining standalone representations for both sides, which
is efficient for practical model deployment. Moreover, we enhance user
profiling by exploiting a large language model (LLM) to extract global
interests from user engagement histories. Extensive experiments on two
benchmark datasets demonstrate that our framework outperforms existing
state-of-the-art (SoTA) methods.
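To illustrate the kind of poly-attention layer the abstract refers to, the following is a minimal PyTorch sketch, not the authors' implementation: a small set of learnable query codes attends over session-level embeddings (assumed to come from a PLM encoder) to produce multiple user-interest vectors. The class name, dimensions, and the `tanh` projection are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PolyAttention(nn.Module):
    """Sketch of a poly-attention layer: k learnable query codes attend
    over a sequence of session embeddings, yielding k interest vectors
    per user instead of a single pooled representation."""

    def __init__(self, dim: int, num_codes: int):
        super().__init__()
        # k learnable query codes, one per interest vector
        self.codes = nn.Parameter(torch.randn(num_codes, dim) * 0.02)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_sessions, dim) -- session embeddings from a PLM
        scores = torch.einsum("kd,bsd->bks", self.codes, torch.tanh(self.proj(h)))
        weights = scores.softmax(dim=-1)  # each code attends over sessions
        # weighted sum of session embeddings per code: (batch, k, dim)
        return torch.einsum("bks,bsd->bkd", weights, h)

# toy usage: 2 users, 10 sessions each, embedding dim 64, 4 interest codes
user_sessions = torch.randn(2, 10, 64)
poly = PolyAttention(dim=64, num_codes=4)
interests = poly(user_sessions)
print(interests.shape)  # torch.Size([2, 4, 64])
```

Keeping several interest vectors per user (rather than one) is what allows a downstream scorer to match different candidate items against different facets of a long engagement history, while the user side can still be precomputed and stored standalone for deployment.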