Evaluating recommender systems remains a long-standing challenge, as offline methods based on historical user interactions and train-test splits often yield unstable and inconsistent results due to exposure bias, popularity bias, sampled evaluations, and missing-not-at-random patterns. In contrast, textual document retrieval benefits from robust, standardized evaluation via Cranfield-style test collections, which combine pooled relevance judgments with controlled setups. While recent work shows that adapting this methodology to recommender systems is feasible, constructing such collections remains costly due to the need for manual relevance judgments, which limits scalability. This paper investigates whether Large Language Models (LLMs) can serve as reliable automatic judges to address these scalability challenges. Using the ML-32M-ext Cranfield-style movie recommendation collection, we first examine the limitations of existing evaluation methodologies. We then analyze the alignment between LLM-judge relevance labels and human-provided relevance labels, as well as the agreement between the recommender system rankings they induce. We find that incorporating richer item metadata and longer user histories improves alignment, and that the LLM judge yields system rankings in high agreement with human-based rankings (Kendall's tau = 0.87). Finally, an industrial case study in the podcast recommendation domain demonstrates the practical value of the LLM judge for model selection. Overall, our results show that LLM-based judging is a viable and scalable approach for evaluating recommender systems.
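The reported ranking agreement can be illustrated with a minimal sketch of how system-level agreement between human-based and LLM-based evaluation is typically quantified: each judge's relevance labels yield one effectiveness score per recommender system, and Kendall's tau is computed between the two induced rankings. The per-system scores below are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch: rank agreement between human-based and LLM-based evaluation.
# The system names and scores are hypothetical; in practice each score would be
# an effectiveness metric (e.g., nDCG) computed from the corresponding judgments.
from scipy.stats import kendalltau

# Hypothetical per-system effectiveness scores under each set of judgments.
human_scores = {"sysA": 0.41, "sysB": 0.35, "sysC": 0.52, "sysD": 0.29}
llm_scores   = {"sysA": 0.44, "sysB": 0.33, "sysC": 0.55, "sysD": 0.31}

systems = sorted(human_scores)
tau, p_value = kendalltau(
    [human_scores[s] for s in systems],
    [llm_scores[s] for s in systems],
)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```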