Recent advances in recommender systems have demonstrated the potential of Reinforcement Learning (RL) to handle the dynamic evolution processes between users and recommender systems. However, training an optimal RL agent is generally impractical given the commonly sparse user feedback data in recommender systems. To address the lack of interaction data in current RL-based recommender systems, we propose to learn a general Model-agnostic Counterfactual Synthesis Policy for counterfactual user interaction data augmentation. The counterfactual synthesis policy aims to synthesise counterfactual states while preserving the information in the original state that is significant to the user's interests, building upon two different training approaches we designed: learning with expert demonstrations and joint training. As a result, each counterfactual is synthesised based on the current recommendation agent's interaction with the environment, adapting to users' dynamic interests. We integrate the proposed policy with Deep Deterministic Policy Gradient (DDPG), Soft Actor Critic (SAC) and Twin Delayed DDPG (TD3) in an adaptive pipeline with a recommendation agent that generates counterfactual data to improve recommendation performance. The empirical results on both online simulation and offline datasets demonstrate the effectiveness and generalisation of our counterfactual synthesis policy and verify that it improves the performance of RL recommendation agents.
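As a rough illustration of the kind of pipeline the abstract describes, the sketch below shows how a counterfactual synthesis policy could augment an off-policy recommendation agent's replay data with synthesised counterfactual states. This is a minimal, hypothetical sketch, not the paper's implementation: the class and function names (`CounterfactualSynthesisPolicy`, `augment_replay_buffer`), the mask-based notion of "interest-relevant" state dimensions, and the Gaussian intervention are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of counterfactual data augmentation for an off-policy
# recommendation agent (DDPG/SAC/TD3-style). Names and mechanics are
# illustrative assumptions, not the paper's API.

class CounterfactualSynthesisPolicy:
    """Maps an observed state to a counterfactual state while keeping the
    dimensions assumed to encode the user's interests unchanged."""
    def __init__(self, state_dim, noise_scale=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state_dim = state_dim
        self.noise_scale = noise_scale

    def synthesise(self, state, interest_mask):
        # Preserve interest-relevant dimensions (mask == 1) and intervene on
        # the remaining dimensions with a small random perturbation.
        perturbation = self.rng.normal(0.0, self.noise_scale, self.state_dim)
        return state + (1.0 - interest_mask) * perturbation


def augment_replay_buffer(buffer, synthesis_policy, interest_mask):
    """Append one counterfactual copy of each (s, a, r, s') transition."""
    counterfactuals = []
    for state, action, reward, next_state in buffer:
        cf_state = synthesis_policy.synthesise(state, interest_mask)
        cf_next_state = synthesis_policy.synthesise(next_state, interest_mask)
        counterfactuals.append((cf_state, action, reward, cf_next_state))
    return buffer + counterfactuals


if __name__ == "__main__":
    state_dim = 8
    policy = CounterfactualSynthesisPolicy(state_dim)
    # Assume the first four dimensions encode the user's interests.
    interest_mask = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
    buffer = [(np.ones(state_dim), 2, 1.0, np.ones(state_dim))]
    augmented = augment_replay_buffer(buffer, policy, interest_mask)
    print(f"buffer size after augmentation: {len(augmented)}")
```

The doubled buffer would then be consumed by the recommendation agent's usual off-policy update, which is how the augmentation stays model-agnostic with respect to DDPG, SAC, or TD3.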