In this paper, we present a user-friendly LiDAR-camera calibration toolkit that is compatible with various LiDAR and camera sensors and requires only a single pair of a LiDAR point cloud and a camera image captured in a targetless environment. Our approach eliminates the need for an initial transform and remains robust even under large translational and rotational offsets between the LiDAR and camera extrinsic parameters. We employ the GlueStick pipeline to establish 2D-3D point and line feature correspondences, yielding a robust and automatic initial guess. To enhance accuracy, we quantitatively analyze the impact of feature distribution on calibration results and adaptively weight the cost of each feature according to these metrics, so that the extrinsic parameters are optimized while the adverse effects of inferior features are filtered out. We validated our method through extensive experiments with various LiDAR-camera sensor combinations in both indoor and outdoor settings. The results demonstrate that our method provides superior robustness and accuracy compared to state-of-the-art techniques. Our code is open-sourced on GitHub to benefit the community.