Deep reinforcement learning (RL) has achieved several high-profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance; in fact, their performance during learning can be extremely poor. This may be acceptable in a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting in which the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages even relatively small sets of demonstration data to massively accelerate the learning process, and that automatically assesses the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN): it starts with better scores over the first million steps on 41 of 42 games, and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to outperform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results on 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
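To make the core idea concrete, the sketch below shows one way to combine a 1-step temporal-difference error with a supervised large-margin term that pushes the demonstrator's action above all other actions, applied only to demonstration transitions. This is a minimal illustration under assumptions: the tabular Q representation and the names `q_net`-style arrays, `margin`, and `lambda_e` are placeholders for this sketch, not the paper's implementation details.

```python
import numpy as np

def td_loss(q, q_target, s, a, r, s_next, gamma=0.99):
    """Squared 1-step TD error for a single transition."""
    td_target = r + gamma * np.max(q_target[s_next])
    return (td_target - q[s, a]) ** 2

def large_margin_loss(q, s, a_expert, margin=0.8):
    """Supervised term: the demonstrator's action must beat every other
    action by at least `margin` (zero when it already does)."""
    margins = np.full(q.shape[1], margin)
    margins[a_expert] = 0.0          # no margin against the expert action itself
    return np.max(q[s] + margins) - q[s, a_expert]

def dqfd_style_loss(q, q_target, transition, is_demo, lambda_e=1.0):
    """Combined loss: TD update on all data, supervised term on demo data only."""
    s, a, r, s_next = transition
    loss = td_loss(q, q_target, s, a, r, s_next)
    if is_demo:
        loss += lambda_e * large_margin_loss(q, s, a)
    return loss

# Tiny usage example with a random tabular Q-function (5 states, 3 actions).
rng = np.random.default_rng(0)
q = rng.normal(size=(5, 3))
q_target = q.copy()
print(dqfd_style_loss(q, q_target, (0, 2, 1.0, 1), is_demo=True))
```

In a deep RL setting the tabular arrays would be replaced by Q-network outputs and the loss minimized by gradient descent over prioritized minibatches; the sketch only shows how the TD and classification terms combine.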