Fidelity-Aware Recommendation Explanations via Stochastic Path Integration
November 22, 2025
Authors: Oren Barkan, Yahlly Schein, Yehonatan Elisha, Veronika Bogina, Mikhail Baklanov, Noam Koenigstein
cs.AI
Abstract
Explanation fidelity, which measures how accurately an explanation reflects a model's true reasoning, remains critically underexplored in recommender systems. We introduce SPINRec (Stochastic Path Integration for Neural Recommender Explanations), a model-agnostic approach that adapts path-integration techniques to the sparse and implicit nature of recommendation data. To overcome the limitations of prior methods, SPINRec employs stochastic baseline sampling: instead of integrating from a fixed or unrealistic baseline, it samples multiple plausible user profiles from the empirical data distribution and selects the most faithful attribution path. This design captures the influence of both observed and unobserved interactions, yielding more stable and personalized explanations. We conduct the most comprehensive fidelity evaluation to date across three models (MF, VAE, NCF), three datasets (ML1M, Yahoo! Music, Pinterest), and a suite of counterfactual metrics, including AUC-based perturbation curves and fixed-length diagnostics. SPINRec consistently outperforms all baselines, establishing a new benchmark for faithful explainability in recommendation. Code and evaluation tools are publicly available at https://github.com/DeltaLabTLV/SPINRec.
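The core idea described above can be sketched in a few lines: integrate gradients along a straight path from a sampled baseline profile to the actual user profile, repeat for several baselines drawn from the empirical distribution, and keep the attribution whose top-ranked items cause the largest score drop when deleted (a counterfactual fidelity proxy). This is a minimal illustrative sketch, not the authors' implementation: the toy sigmoid scoring model, the analytic gradient, and all parameter choices (number of path steps, sample count, top-k deletion) are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy recommender: score a target item for a binary user profile x.
n_items, d = 50, 8
W = rng.normal(scale=0.3, size=(n_items, d))   # item embeddings (toy)
v_target = rng.normal(scale=0.3, size=d)       # target-item embedding (toy)

def score(x):
    """Differentiable toy score: sigmoid of (profile-pooled embedding . target)."""
    z = (x @ W) @ v_target
    return 1.0 / (1.0 + np.exp(-z))

def grad_score(x):
    """Analytic gradient of the toy score w.r.t. the interaction vector x."""
    s = score(x)
    return s * (1.0 - s) * (W @ v_target)

def integrated_gradients(x, baseline, steps=32):
    """Path integration from `baseline` to `x` (Riemann approximation)."""
    alphas = np.linspace(0.0, 1.0, steps)
    path = baseline[None, :] + alphas[:, None] * (x - baseline)[None, :]
    grads = np.stack([grad_score(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

def spin_attribution(x, profile_pool, n_samples=8, top_k=5):
    """Stochastic baseline sampling: integrate from several empirically sampled
    user profiles and keep the most faithful path, where fidelity is proxied by
    the score drop after deleting the top-k attributed items."""
    base_score = score(x)
    best_attr, best_drop = None, -np.inf
    idx = rng.choice(len(profile_pool), size=n_samples, replace=False)
    for b in profile_pool[idx]:
        attr = integrated_gradients(x, b)
        top = np.argsort(-attr)[:top_k]
        x_pert = x.copy()
        x_pert[top] = 0.0                      # counterfactual: remove items
        drop = base_score - score(x_pert)
        if drop > best_drop:
            best_attr, best_drop = attr, drop
    return best_attr
```

In this sketch, averaging gradients over the interpolation path captures the influence of both observed entries (x_i = 1) and unobserved ones (x_i = 0 where the sampled baseline has an interaction), which is why baseline choice matters for sparse implicit data.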