Visual uncertainties such as occlusions, lack of texture, and noise present significant challenges in obtaining accurate kinematic models for safe robotic manipulation. We introduce a probabilistic real-time approach that leverages the human hand as a prior to mitigate these uncertainties. By tracking the constrained motion of the human hand during manipulation and explicitly modeling uncertainties in visual observations, our method reliably estimates an object's kinematic model online. We validate our approach on a novel dataset featuring challenging objects that are occluded during manipulation and offer limited articulations for perception. The results demonstrate that by incorporating an appropriate prior and explicitly accounting for uncertainties, our method produces accurate estimates, outperforming two recent baselines by 195% and 140%, respectively. Furthermore, we demonstrate that our approach's estimates are precise enough to allow a robot to manipulate even small objects safely.