We propose Observer Actor (ObAct), a novel framework for active vision imitation learning in which an observer moves to obtain optimal visual observations for an actor. We study ObAct on a dual-arm robotic system equipped with wrist-mounted cameras. At test time, ObAct dynamically assigns observer and actor roles to the arms: the observer arm constructs a 3D Gaussian Splatting (3DGS) representation of the scene from three images, virtually explores this representation to find an optimal camera pose, and then moves to that pose; the actor arm then executes a policy using the observer's observations. This formulation enhances the clarity and visibility of both the object and the gripper in the policy's observations. As a result, ObAct enables training ambidextrous policies whose test-time observations remain close to the occlusion-free training distribution, leading to more robust policies. We study this formulation with two existing imitation learning methods -- trajectory transfer and behavior cloning -- and experiments show that ObAct significantly outperforms static-camera setups: trajectory transfer improves by 145% without occlusion and by 233% with occlusion, while behavior cloning improves by 75% and 143%, respectively. Videos are available at https://obact.github.io.
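To make the test-time pipeline concrete, the following is a minimal, runnable Python sketch of the virtual-exploration step: sample candidate camera poses around the workspace, render each from the 3DGS scene, score how clearly the object and gripper appear, and select the best pose for the observer arm. Every name here (`render_splats`, `visibility_score`, `sample_hemisphere_poses`, and the hemisphere sampling strategy itself) is an illustrative assumption, not the authors' actual API, renderer, or scoring criterion.

```python
"""Hypothetical sketch of ObAct's virtual exploration over a 3DGS scene.
The renderer and visibility score are placeholders, not the paper's method."""

from dataclasses import dataclass
import numpy as np


@dataclass
class CameraPose:
    position: np.ndarray  # (3,) camera centre in the world frame
    look_at: np.ndarray   # (3,) point the optical axis passes through


def render_splats(scene, pose: CameraPose) -> np.ndarray:
    """Placeholder for rasterising the 3DGS scene from a virtual camera pose."""
    # A real system would splat the fitted Gaussians here; we return a stub image.
    return np.full((240, 320, 3), np.linalg.norm(pose.position), dtype=np.float32)


def visibility_score(frame: np.ndarray) -> float:
    """Placeholder proxy for object/gripper visibility (higher is better)."""
    return float(-frame.std())  # hypothetical heuristic, not the paper's criterion


def sample_hemisphere_poses(center: np.ndarray, radius: float, n: int):
    """Sample candidate camera poses on a hemisphere above the workspace."""
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n, 3))
    dirs[:, 2] = np.abs(dirs[:, 2])  # keep virtual cameras above the table
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return [CameraPose(center + radius * d, center) for d in dirs]


def best_virtual_pose(scene, center, radius=0.4, n_candidates=64) -> CameraPose:
    """Virtually explore the 3DGS scene and return the highest-scoring pose."""
    candidates = sample_hemisphere_poses(center, radius, n_candidates)
    return max(candidates, key=lambda p: visibility_score(render_splats(scene, p)))


if __name__ == "__main__":
    scene = None  # stand-in for a 3DGS scene fitted from three wrist-camera images
    pose = best_virtual_pose(scene, center=np.array([0.5, 0.0, 0.1]))
    print("Selected observer camera pose:", pose.position)
```

In the full pipeline, the observer arm would move its wrist camera to the returned pose, and the actor arm would then execute the policy on the observer's live images.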