Nonprehensile manipulation, such as pushing objects across cluttered environments, presents a challenging control problem due to complex contact dynamics and long-horizon planning requirements. In this work, we propose HeRD, a hierarchical reinforcement learning-diffusion policy that decomposes pushing tasks into two levels: high-level goal selection and low-level trajectory generation. A high-level reinforcement learning (RL) agent selects intermediate spatial goals, and a low-level goal-conditioned diffusion model generates feasible, efficient trajectories to reach them. This architecture combines the long-term reward-maximizing behaviour of RL with the generative capabilities of diffusion models. We evaluate our method in a 2D simulation environment and show that it outperforms a state-of-the-art baseline in success rate, path efficiency, and generalization across multiple environment configurations. Our results suggest that hierarchical control with generative low-level planning is a promising direction for scalable, goal-directed nonprehensile manipulation. Code, documentation, and trained models are available at https://github.com/carosteven/HeRD.
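To make the two-level decomposition concrete, below is a minimal, self-contained sketch of the control loop described above. Since the abstract does not specify any implementation details, everything here is an illustrative stand-in rather than the released HeRD code: a point-mass toy replaces the 2D pushing simulator, a fixed goal-selection rule replaces the trained RL agent, and straight-line interpolation replaces the sampled diffusion trajectory. All class and method names (`ToyPushEnv`, `HighLevelAgent`, `LowLevelPlanner`) are hypothetical.

```python
import numpy as np

# Illustrative stand-ins only; not the released HeRD implementation.

class ToyPushEnv:
    """Point-mass toy standing in for the 2D pushing simulator."""
    def __init__(self, target=(1.0, 1.0)):
        self.target = np.asarray(target, dtype=float)

    def reset(self):
        self.pos = np.zeros(2)
        return self.pos.copy()

    def step(self, action):
        self.pos += action  # apply a small 2D displacement
        done = np.linalg.norm(self.pos - self.target) < 0.05
        return self.pos.copy(), -1.0, done

class HighLevelAgent:
    """Stand-in for the RL goal selector: proposes an intermediate 2D goal."""
    def select_goal(self, obs, target):
        # Move a fixed fraction of the way toward the final target;
        # in HeRD this decision would come from the trained RL policy.
        return obs + 0.25 * (target - obs)

class LowLevelPlanner:
    """Stand-in for the goal-conditioned diffusion model: a straight-line
    trajectory plays the role of a sampled denoised trajectory."""
    def sample(self, obs, subgoal, horizon=8):
        waypoints = np.linspace(obs, subgoal, horizon + 1)
        return np.diff(waypoints, axis=0)  # per-step actions

env, hl, ll = ToyPushEnv(), HighLevelAgent(), LowLevelPlanner()
obs, done = env.reset(), False
while not done:
    subgoal = hl.select_goal(obs, env.target)  # high level: pick subgoal
    for action in ll.sample(obs, subgoal):     # low level: follow trajectory
        obs, reward, done = env.step(action)
        if done:
            break
print("reached target near", obs)
```

In the full system, the two stand-in classes would be replaced by the trained RL goal selector and the goal-conditioned diffusion sampler, while the outer loop structure (select subgoal, roll out the generated trajectory, repeat) stays the same.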