Explanation fidelity, which measures how accurately an explanation reflects a model's true reasoning, remains critically underexplored in recommender systems. We introduce SPINRec (Stochastic Path Integration for Neural Recommender Explanations), a model-agnostic approach that adapts path-integration techniques to the sparse and implicit nature of recommendation data. To overcome the limitations of prior methods, SPINRec employs stochastic baseline sampling: instead of integrating from a fixed or unrealistic baseline, it samples multiple plausible user profiles from the empirical data distribution and selects the most faithful attribution path. This design captures the influence of both observed and unobserved interactions, yielding more stable and personalized explanations. We conduct the most comprehensive fidelity evaluation to date across three models (MF, VAE, NCF), three datasets (ML1M, Yahoo! Music, Pinterest), and a suite of counterfactual metrics, including AUC-based perturbation curves and fixed-length diagnostics. SPINRec consistently outperforms all baseline explanation methods, establishing a new benchmark for faithful explainability in recommendation. Code and evaluation tools are publicly available at https://github.com/DeltaLabTLV/SPINRec.
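To illustrate the core idea of stochastic baseline sampling, the following is a minimal sketch, not the authors' implementation: it runs path-integrated gradients (a Riemann-sum approximation of integrated gradients) from several user profiles sampled from an empirical pool, then keeps the attribution whose top-ranked items cause the largest score drop when removed (a simple deletion-style fidelity proxy). The scorer, profile pool, and the `fidelity` proxy are all hypothetical stand-ins for the paper's models and counterfactual metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear scorer standing in for a neural recommender:
# score(x) = v . tanh(W x), where x is a user's binary interaction vector.
n_items, hidden = 20, 8
W = rng.normal(size=(hidden, n_items))
v = rng.normal(size=hidden)

def score(x):
    return v @ np.tanh(W @ x)

def grad(x):
    # Analytical gradient of score w.r.t. the interaction vector x.
    h = np.tanh(W @ x)
    return (v * (1.0 - h**2)) @ W

def integrated_gradients(x, baseline, steps=64):
    """Midpoint Riemann-sum approximation of path-integrated gradients
    along the straight line from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def fidelity(attr, x, k=3):
    """Deletion-style proxy: score drop after zeroing the k items with the
    highest attribution (larger drop = more faithful explanation)."""
    top = np.argsort(-attr)[:k]
    x_pert = x.copy()
    x_pert[top] = 0.0
    return score(x) - score(x_pert)

# Target user, and a pool of empirical profiles to sample baselines from.
x = (rng.random(n_items) < 0.3).astype(float)
profiles = (rng.random((50, n_items)) < 0.3).astype(float)

# Stochastic baseline sampling: integrate from several sampled profiles
# and keep the attribution path that is most faithful under the proxy.
best_attr, best_fid = None, -np.inf
for b in profiles[rng.choice(len(profiles), size=8, replace=False)]:
    attr = integrated_gradients(x, b)
    f = fidelity(attr, x)
    if f > best_fid:
        best_fid, best_attr = f, attr

print("best fidelity (score drop):", round(best_fid, 4))
```

Because integrated gradients satisfy the completeness property, the attributions from each sampled baseline sum (approximately) to `score(x) - score(baseline)`, so different baselines genuinely yield different decompositions of the prediction, and the selection step picks the most counterfactually faithful one.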