Fidelity-Aware Recommendation Explanations via Stochastic Path Integration
November 22, 2025
Authors: Oren Barkan, Yahlly Schein, Yehonatan Elisha, Veronika Bogina, Mikhail Baklanov, Noam Koenigstein
cs.AI
Abstract
Explanation fidelity, which measures how accurately an explanation reflects a model's true reasoning, remains critically underexplored in recommender systems. We introduce SPINRec (Stochastic Path Integration for Neural Recommender Explanations), a model-agnostic approach that adapts path-integration techniques to the sparse and implicit nature of recommendation data. To overcome the limitations of prior methods, SPINRec employs stochastic baseline sampling: instead of integrating from a fixed or unrealistic baseline, it samples multiple plausible user profiles from the empirical data distribution and selects the most faithful attribution path. This design captures the influence of both observed and unobserved interactions, yielding more stable and personalized explanations. We conduct the most comprehensive fidelity evaluation to date across three models (MF, VAE, NCF), three datasets (ML1M, Yahoo! Music, Pinterest), and a suite of counterfactual metrics, including AUC-based perturbation curves and fixed-length diagnostics. SPINRec consistently outperforms all baselines, establishing a new benchmark for faithful explainability in recommendation. Code and evaluation tools are publicly available at https://github.com/DeltaLabTLV/SPINRec.
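To make the described procedure concrete, below is a minimal, illustrative sketch of stochastic-baseline path integration for a recommender in PyTorch. It is not the released SPINRec implementation: the function names (`integrated_gradients`, `spin_style_attribution`), the numbers of baselines and integration steps, and the top-k score-drop proxy used to pick the most faithful path are all assumptions chosen for brevity; the actual method and evaluation metrics are defined in the paper and in the repository linked above.

```python
# Illustrative sketch only; model, scoring function, and the fidelity
# criterion below are simplified assumptions, not the authors' code.
import torch

def integrated_gradients(model_score, x, baseline, steps=32):
    """Approximate path-integrated gradients from `baseline` to the user vector `x`.

    model_score: callable mapping a batch of user interaction vectors to the
                 recommendation score of the target item (one scalar per row).
    """
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)       # interpolation coefficients
    path = (baseline + alphas * (x - baseline)).detach()       # straight-line path of inputs
    path.requires_grad_(True)
    scores = model_score(path)                                  # target-item score along the path
    grads = torch.autograd.grad(scores.sum(), path)[0]          # d(score)/d(input) at each step
    avg_grad = grads.mean(dim=0)                                # Riemann approximation of the integral
    return (x - baseline).squeeze(0) * avg_grad                 # per-interaction attribution

def spin_style_attribution(model_score, x, empirical_profiles, num_baselines=5):
    """Sample baselines from observed user profiles and keep the most faithful attribution.

    Faithfulness proxy used here (an assumption): the score drop when the
    top-k attributed interactions are removed from the user profile.
    """
    best_attr, best_drop = None, -float("inf")
    idx = torch.randperm(empirical_profiles.size(0))[:num_baselines]
    for baseline in empirical_profiles[idx]:
        attr = integrated_gradients(model_score, x, baseline.unsqueeze(0))
        topk = attr.topk(5).indices                             # most influential interactions
        perturbed = x.clone().squeeze(0)
        perturbed[topk] = 0.0                                   # counterfactually remove them
        drop = (model_score(x) - model_score(perturbed.unsqueeze(0))).item()
        if drop > best_drop:                                    # larger drop => more faithful explanation
            best_attr, best_drop = attr, drop
    return best_attr
```

In this sketch, each baseline drawn from `empirical_profiles` yields its own attribution path, and the path whose top-ranked interactions cause the largest counterfactual score drop is retained, mirroring the abstract's idea of integrating from plausible user profiles rather than a fixed, unrealistic baseline.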