Many real-world bandit problems involve non-stationary reward distributions, where the optimal decision may shift as the environment evolves. Classical Multi-Armed Bandit (MAB) algorithms such as the Upper Confidence Bound (UCB) family, however, degrade significantly when reward distributions change over time. To address this limitation, this paper introduces and evaluates FDSW-UCB, a novel dual-view algorithm that integrates a discount-based long-term perspective with a sliding-window-based short-term view. A data-driven semi-synthetic simulation platform, built on the MovieLens-1M and Open Bandit datasets, is developed to test algorithmic adaptability under abrupt and gradual drift scenarios. Experimental results show that a well-configured sliding-window mechanism (SW-UCB) remains robust, whereas the widely used discounting method (D-UCB) suffers a fundamental learning failure that leads to linear regret. Crucially, FDSW-UCB with an optimistic aggregation strategy achieves superior performance in dynamic settings, indicating that the ensemble strategy itself is a decisive factor for success.
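To make the dual-view idea concrete, the following Python sketch maintains a discounted (long-term) and a sliding-window (short-term) UCB index per arm and aggregates them optimistically by taking the pointwise maximum. This is a minimal illustration under stated assumptions: the class name FDSWUCB, the parameters gamma, window, and xi, and the max-based aggregation rule are hypothetical choices for exposition, not the paper's exact specification.

```python
import numpy as np
from collections import deque


class FDSWUCB:
    """Illustrative sketch of a dual-view (discount + sliding-window) UCB agent.

    Assumption: "optimistic aggregation" is modeled as the elementwise max
    of the two per-arm UCB indices; the paper may define it differently.
    """

    def __init__(self, n_arms, gamma=0.99, window=200, xi=0.6):
        self.n_arms = n_arms
        self.gamma = gamma      # discount factor for the long-term view
        self.window = window    # window length for the short-term view
        self.xi = xi            # exploration scaling (illustrative value)
        # Discounted statistics, decayed at every step.
        self.disc_counts = np.zeros(n_arms)
        self.disc_rewards = np.zeros(n_arms)
        # Recent (arm, reward) pairs; deque evicts beyond `window`.
        self.history = deque(maxlen=window)

    def _index(self, counts, rewards, horizon):
        # UCB index: empirical mean + exploration bonus; unplayed arms
        # get +inf so every arm is tried at least once.
        with np.errstate(divide="ignore", invalid="ignore"):
            mean = np.where(counts > 0, rewards / counts, 0.0)
            bonus = np.sqrt(self.xi * np.log(max(horizon, 1)) / counts)
        return np.where(counts > 0, mean + bonus, np.inf)

    def select(self):
        # Long-term view: discounted counts and rewards.
        idx_d = self._index(self.disc_counts, self.disc_rewards,
                            self.disc_counts.sum())
        # Short-term view: statistics recomputed over the window.
        win_counts = np.zeros(self.n_arms)
        win_rewards = np.zeros(self.n_arms)
        for arm, r in self.history:
            win_counts[arm] += 1
            win_rewards[arm] += r
        idx_w = self._index(win_counts, win_rewards, len(self.history))
        # Optimistic aggregation: follow whichever view is more hopeful.
        return int(np.argmax(np.maximum(idx_d, idx_w)))

    def update(self, arm, reward):
        # Decay all discounted statistics, then credit the pulled arm.
        self.disc_counts *= self.gamma
        self.disc_rewards *= self.gamma
        self.disc_counts[arm] += 1.0
        self.disc_rewards[arm] += reward
        self.history.append((arm, reward))
```

Under this sketch, the discounted view forgets slowly and tracks gradual drift, while the window view reacts quickly to abrupt shifts; the max rule lets whichever view currently assigns higher optimism drive exploration.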