Ultra-wideband (UWB)-vision fusion localization has found wide application in multi-agent relative localization. The challenging matching problem between robots and visual detections renders existing methods highly dependent on identity-encoded hardware or delicately tuned algorithms, and overconfident yet erroneous matches can inflict irreversible damage on the localization system. To address this issue, we introduce Mr. Virgil, an end-to-end learned multi-robot visual-range relative localization framework consisting of a graph neural network for data association between UWB rangings and visual detections, followed by a differentiable pose graph optimization (PGO) back-end. The graph-based front-end supplies robust matching results, accurate initial position predictions, and credible uncertainty estimates, which are subsequently integrated into the PGO back-end to improve the accuracy of the final pose estimation. Additionally, a decentralized system is implemented for real-world applications. Experiments spanning varying numbers of robots, simulated and real-world environments, and occluded and non-occluded conditions demonstrate the stability and accuracy of our method across diverse scenes compared with conventional approaches. Our code is available at: https://github.com/HiOnes/Mr-Virgil.
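To make the two-stage pipeline described above concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: function names such as `soft_association` and `weighted_pgo_step`, the soft-assignment scoring, and the gradient-descent refinement are illustrative assumptions standing in for the paper's GNN front-end and differentiable PGO back-end.

```python
import torch

def soft_association(range_feats, detection_feats):
    """Score every (UWB ranging, visual detection) pair and return a
    row-wise soft assignment matrix (a stand-in for the GNN front-end)."""
    logits = range_feats @ detection_feats.T      # (N_uwb, N_det) affinities
    return torch.softmax(logits, dim=-1)          # soft matches per ranging

def weighted_pgo_step(init_positions, uwb_ranges, log_sigma, iters=50, lr=0.05):
    """Refine initial positions by minimizing uncertainty-weighted range
    residuals. A Gauss-Newton PGO solver would be used in practice; plain
    gradient descent keeps this sketch short while remaining differentiable."""
    positions = init_positions.clone().requires_grad_(True)
    opt = torch.optim.SGD([positions], lr=lr)
    weights = torch.exp(-log_sigma)               # low predicted sigma -> high weight
    for _ in range(iters):
        opt.zero_grad()
        dists = torch.cdist(positions, positions)          # predicted pairwise ranges
        loss = (weights * (dists - uwb_ranges) ** 2).sum()  # weighted residuals
        loss.backward()
        opt.step()
    return positions.detach()

# Toy usage: 4 agents with random features and noisy mutual rangings.
torch.manual_seed(0)
feats_r, feats_d = torch.randn(4, 16), torch.randn(4, 16)
matches = soft_association(feats_r, feats_d)      # (4, 4) soft match matrix
gt = torch.rand(4, 2) * 5.0
ranges = torch.cdist(gt, gt) + 0.05 * torch.randn(4, 4)
refined = weighted_pgo_step(gt + 0.3 * torch.randn(4, 2), ranges, torch.zeros(4, 4))
```

Because both stages are differentiable, a training loss on the refined positions can, in principle, back-propagate through the optimization into the association network, which is the end-to-end property the abstract emphasizes.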