Sequential recommendation models must navigate sparse interaction data, popularity bias, and conflicting objectives such as accuracy versus diversity. While recent contrastive self-supervised learning (SSL) methods offer improved accuracy, they come with trade-offs: large batch requirements, reliance on hand-crafted augmentations, and negative sampling that can reinforce popularity bias. In this paper we introduce BT-SR, a novel non-contrastive SSL framework that integrates the Barlow Twins redundancy-reduction principle into a Transformer-based next-item recommender. BT-SR learns embeddings that align users with similar short-term behaviors while preserving long-term distinctions, without requiring negative sampling or artificial perturbations. This structure-sensitive alignment allows BT-SR to more effectively recognize emerging user intent and mitigate the influence of noisy historical context. Our experiments on five public benchmarks demonstrate that BT-SR consistently improves next-item prediction accuracy and significantly enhances long-tail item coverage and recommendation calibration. Crucially, we show that a single hyperparameter can control the accuracy-diversity trade-off, enabling practitioners to adapt recommendations to specific application needs.
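The Barlow Twins redundancy-reduction objective referenced above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the function name, the NumPy formulation, and the default weight `lam` (which plays the role of the trade-off hyperparameter λ in the original Barlow Twins loss) are assumptions for exposition. Given two embedding views of the same batch of user sequences, the loss pushes the diagonal of their cross-correlation matrix toward 1 (invariance across views) and the off-diagonal entries toward 0 (decorrelated, non-redundant dimensions), with no negative samples needed.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss on two (N, D) batches of sequence embeddings.

    z_a, z_b: embeddings of two views of the same user sequences.
    lam: weight on the redundancy (off-diagonal) term; illustrative default.
    """
    # Standardize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-9)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-9)

    n, _ = z_a.shape
    c = z_a.T @ z_b / n  # (D, D) cross-correlation matrix

    on_diag = ((1.0 - np.diag(c)) ** 2).sum()            # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag
```

When the two views are identical, the diagonal of the cross-correlation matrix is already 1 after standardization, so only the off-diagonal redundancy term contributes; increasing `lam` trades representation decorrelation against view alignment.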