Sequential recommendation (SR) models with transformer-based architectures are widely adopted in real-world applications, where they require frequent retraining to adapt to ever-changing user preferences. However, training transformer-based SR models often incurs a high computational cost associated with scoring extensive item catalogs, often exceeding thousands of items. This cost arises mainly from the use of cross-entropy loss, whose peak memory scales proportionally to catalog size, batch size, and sequence length. Recognizing this, practitioners in the field of recommender systems typically address memory consumption by combining the cross-entropy (CE) loss with negative sampling, thereby reducing the explicit memory demands of the final layer. However, using too few negative samples degrades model performance, and, as we demonstrate in our work, increasing the number of negative samples and the batch size further improves performance but quickly exhausts the memory of industrial GPUs (~40 GB). In this work, we introduce the CCE- method, which offers a GPU-efficient implementation of the CE loss with negative sampling. Our method accelerates training by up to two times while reducing memory consumption by more than 10 times. Leveraging the memory savings that CCE- affords during training, it becomes feasible to improve model accuracy on datasets with large item catalogs compared to models trained with the original PyTorch-implemented loss functions. Finally, we analyze key memory-related hyperparameters and highlight the need for a delicate balance among them. We demonstrate that scaling the number of negative samples and the batch size jointly yields better results than maximizing either one alone. To facilitate further adoption of CCE-, we release a Triton kernel that efficiently implements the proposed method.
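To make the memory argument concrete, the sketch below contrasts full-catalog cross-entropy, which materializes a (batch, catalog)-sized logit matrix, with CE over the positive item plus a handful of sampled negatives, which materializes only (batch, 1 + negatives) logits. This is a minimal NumPy illustration of the sampled-CE setup the abstract describes, not the CCE- Triton kernel itself; the uniform negative sampling and all function names here are illustrative assumptions.

```python
import numpy as np

def sampled_ce_loss(user_emb, item_emb, pos_ids, num_neg, rng):
    """CE loss over the positive item plus `num_neg` sampled negatives.

    user_emb: (B, d) per-step user/sequence representations
    item_emb: (V, d) full catalog embedding table
    pos_ids:  (B,)   ground-truth next-item ids

    Instead of a (B, V) logit matrix, only (B, 1 + num_neg) logits are
    materialized, so peak memory no longer scales with the catalog size V.
    """
    B, _ = user_emb.shape
    V = item_emb.shape[0]
    # Uniform negatives for illustration; real systems often use
    # popularity-based or in-batch sampling instead.
    neg_ids = rng.integers(0, V, size=(B, num_neg))
    cand_ids = np.concatenate([pos_ids[:, None], neg_ids], axis=1)  # (B, 1+num_neg)
    logits = np.einsum("bd,bkd->bk", user_emb, item_emb[cand_ids])
    logits = logits - logits.max(axis=1, keepdims=True)             # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()                                  # positive is column 0

rng = np.random.default_rng(0)
B, d, V = 8, 16, 1000
loss = sampled_ce_loss(rng.normal(size=(B, d)),
                       rng.normal(size=(V, d)),
                       rng.integers(0, V, size=B),
                       num_neg=64, rng=rng)
```

With full-catalog CE the logit matrix would be 8 x 1000 here; with 64 negatives it is 8 x 65, and the gap grows linearly with catalog size, which is the scaling the abstract attributes to the final layer.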