Graph neural networks (GNNs) have advanced recommender systems by modeling interaction relationships. However, existing graph-based recommenders rely on sparse ID features and do not fully exploit textual information, resulting in low information density within representations. Furthermore, graph contrastive learning faces two challenges: random negative sampling can introduce false negative samples, and fixed temperature coefficients cannot adapt to the heterogeneity of different nodes. In addition, current efforts to enhance recommendation with large language models (LLMs) have not fully utilized their Chain-of-Thought (CoT) reasoning capabilities to guide representation learning. To address these limitations, we introduce LGHRec (LLM-CoT Enhanced Graph Neural Recommendation with Harmonized Group Policy Optimization). This framework leverages the CoT reasoning ability of LLMs to generate semantic IDs, enriching the reasoning process and improving the information density and semantic quality of representations. Moreover, we design a reinforcement learning algorithm, Harmonized Group Policy Optimization (HGPO), to optimize the negative sampling strategy and temperature coefficients in contrastive learning. This approach enhances long-tail recommendation performance and ensures optimization consistency across different groups. Experimental results on three datasets demonstrate that LGHRec improves representation quality through semantic IDs generated by the LLM's CoT reasoning and effectively boosts contrastive learning with HGPO. Our method outperforms several baseline models. The code is available at: https://anonymous.4open.science/r/LLM-Rec.