The double descent (DD) paradox, in which over-parameterized models see generalization improve beyond the interpolation point, remains largely unexplored in the non-stationary setting of Deep Reinforcement Learning (DRL). We present preliminary evidence that DD occurs in model-free DRL, investigating it systematically by varying model capacity within the Actor-Critic framework. We rely on an information-theoretic metric, Policy Entropy, to quantify policy uncertainty throughout training. Preliminary results show a clear epoch-wise DD curve; the policy's entry into the second descent region correlates with a sustained and significant reduction in Policy Entropy. This entropic decay suggests that over-parameterization acts as an implicit regularizer, guiding the policy toward robust, flatter minima in the loss landscape. These findings establish DD as a factor in DRL and provide an information-based mechanism for designing agents that are more general, transferable, and robust.
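For concreteness, the Policy Entropy of a discrete-action policy at a state s is H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s), averaged over states encountered during training. The sketch below shows one way such a metric could be tracked for a categorical actor head; the PyTorch implementation, network shape, and function names are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch (assumption): tracking mean policy entropy for a
# categorical actor head during training. Not the paper's implementation.
import torch
import torch.nn as nn


class CategoricalActor(nn.Module):
    """Minimal actor network producing action logits for a discrete action space."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        # Categorical over actions, parameterized by logits
        return torch.distributions.Categorical(logits=self.net(obs))


def mean_policy_entropy(actor: CategoricalActor, obs_batch: torch.Tensor) -> float:
    """Average H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s) over a batch of states."""
    with torch.no_grad():
        dist = actor(obs_batch)
        return dist.entropy().mean().item()


if __name__ == "__main__":
    actor = CategoricalActor(obs_dim=8, n_actions=4)
    obs = torch.randn(256, 8)  # stand-in for states sampled during training
    print(f"mean policy entropy: {mean_policy_entropy(actor, obs):.4f}")
```

Logging this quantity once per epoch, alongside return, is sufficient to relate the onset of the second descent to the entropic decay described above.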