Standard practice across domains from robotics to language is to first pretrain a policy on a large-scale demonstration dataset, and then finetune this policy, typically with reinforcement learning (RL), to improve performance on deployment domains. This finetuning step has proved critical in achieving human- or super-human-level performance, yet while much attention has been given to developing more effective finetuning algorithms, little has been paid to ensuring the pretrained policy is an effective initialization for RL finetuning. In this work we seek to understand how the pretrained policy affects finetuning performance, and how to pretrain policies so that they are effective initializations for finetuning. We first show theoretically that standard behavioral cloning (BC) -- which trains a policy to directly match the actions played by the demonstrator -- can fail to ensure coverage over the demonstrator's actions, a minimal condition necessary for effective RL finetuning. We then show that if, instead of exactly fitting the observed demonstrations, we train a policy to model the posterior distribution of the demonstrator's behavior given the demonstration dataset, we obtain a policy that does ensure coverage over the demonstrator's actions, enabling more effective finetuning. Furthermore, this policy -- which we refer to as the posterior behavioral cloning (PostBC) policy -- achieves this while ensuring its pretrained performance is no worse than that of the BC policy. Finally, we show that PostBC is practically implementable with modern generative models in robotic control domains -- relying only on standard supervised learning -- and leads to significantly improved RL finetuning performance on both realistic robotic control benchmarks and real-world robotic manipulation tasks, compared to standard behavioral cloning.
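
To make the BC-versus-posterior distinction concrete, below is a minimal sketch in PyTorch. The standard BC objective simply regresses the policy's action onto the demonstrator's action. The posterior-style variant shown here approximates uncertainty over the demonstrator's behavior with a bootstrapped ensemble; this ensemble construction, and all names in the snippet, are illustrative assumptions for this sketch, not the paper's actual PostBC recipe, which the abstract describes as using modern generative models trained with standard supervised learning.

```python
# Minimal sketch: standard behavioral cloning vs. a hypothetical posterior-style
# variant approximated with a bootstrapped ensemble. The ensemble is an assumed
# stand-in for "modeling the posterior over demonstrator behavior"; it is NOT
# the paper's PostBC construction.
import torch
import torch.nn as nn


def bc_loss(policy: nn.Module, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """Standard BC: fit the policy directly to the demonstrator's actions."""
    return ((policy(states) - actions) ** 2).mean()


class EnsemblePolicy(nn.Module):
    """Approximate posterior uncertainty over demonstrator behavior with K heads.

    Sampling a head at execution time spreads probability mass over plausible
    demonstrator actions -- the coverage property the abstract argues plain BC
    can fail to provide.
    """

    def __init__(self, state_dim: int, action_dim: int, num_heads: int = 5):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, action_dim))
            for _ in range(num_heads)
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # Sample one head per forward pass; averaging the heads instead would
        # collapse back toward a point estimate and lose the coverage benefit.
        head = self.heads[torch.randint(len(self.heads), (1,)).item()]
        return head(states)


def posterior_bc_loss(ensemble: EnsemblePolicy, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """Fit each head on its own bootstrap resample of the demonstrations.

    Bootstrapping is one standard (assumed) way to approximate a posterior;
    it keeps training purely supervised, mirroring the abstract's claim that
    PostBC relies only on standard supervised learning.
    """
    losses = []
    for head in ensemble.heads:
        idx = torch.randint(len(states), (len(states),))
        losses.append(((head(states[idx]) - actions[idx]) ** 2).mean())
    return torch.stack(losses).mean()
```

Either loss can be minimized with an ordinary optimizer loop over the demonstration dataset; the only difference at pretraining time is whether the policy is fit as a single point estimate (BC) or as a distribution over plausible demonstrator behaviors (the posterior-style variant).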


