Sensor fusion of camera, LiDAR, and 4-dimensional (4D) Radar has brought significant performance improvements in autonomous driving. However, fundamental challenges remain: deeply coupled fusion methods assume continuous sensor availability, making them vulnerable to sensor degradation and failure, whereas sensor-wise cross-attention fusion methods suffer from high computational cost and lack a unified feature representation. This paper presents availability-aware sensor fusion (ASF), a novel method that employs unified canonical projection (UCP) to ensure consistency across all sensor features for fusion, and cross-attention across sensors along patches (CASAP) to enhance the robustness of sensor fusion against sensor degradation and failure. As a result, the proposed ASF achieves superior object detection performance over existing state-of-the-art fusion methods under various weather and sensor degradation (or failure) conditions. Extensive experiments on the K-Radar dataset demonstrate that ASF improves AP BEV by 9.7% (reaching 87.2%) and AP 3D by 20.1% (reaching 73.6%) for object detection at IoU=0.5, while requiring low computational cost. All code is available at https://github.com/kaist-avelab/k-radar.
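To make the two ingredients concrete, the sketch below illustrates patch-wise cross-attention over sensor features that have already been projected into a shared canonical BEV grid (UCP-style), with an availability mask so that a degraded or failed sensor is simply excluded from attention. This is a minimal illustration under assumed shapes; the module name CASAPFusion, the availability mask, and all dimensions are hypothetical choices for exposition, not the authors' implementation (see the linked repository for the actual code).

```python
# Minimal sketch: patch-wise cross-attention fusion across sensors whose
# features share one canonical BEV grid. Names and shapes are illustrative
# assumptions, not the ASF authors' implementation.
import torch
import torch.nn as nn

class CASAPFusion(nn.Module):
    def __init__(self, dim=64, patch=4, heads=4):
        super().__init__()
        # Split each (C, H, W) feature map into non-overlapping patch tokens.
        self.unfold = nn.Unfold(kernel_size=patch, stride=patch)
        self.attn = nn.MultiheadAttention(dim * patch * patch, heads,
                                          batch_first=True)

    def forward(self, feats, avail):
        # feats: list of S tensors, each (B, C, H, W), already in the
        #        shared canonical space (UCP-style projection assumed done).
        # avail: (B, S) boolean mask; False marks a degraded/failed sensor.
        tokens = [self.unfold(f).transpose(1, 2) for f in feats]  # S x (B, P, D)
        x = torch.stack(tokens, dim=1)                            # (B, S, P, D)
        B, S, P, D = x.shape
        q = x.reshape(B, S * P, D)
        # Mask out all patch tokens of unavailable sensors so attention
        # simply ignores them instead of attending to corrupted features.
        key_mask = (~avail).unsqueeze(-1).expand(B, S, P).reshape(B, S * P)
        fused, _ = self.attn(q, q, q, key_padding_mask=key_mask)
        # Average the fused tokens over the sensors that are available.
        fused = fused.reshape(B, S, P, D)
        w = avail.float().unsqueeze(-1).unsqueeze(-1)             # (B, S, 1, 1)
        return (fused * w).sum(1) / w.sum(1).clamp(min=1.0)       # (B, P, D)

# Usage: three sensors on a 32x32 canonical grid; radar dropped at test time.
cam, lidar, radar = (torch.randn(2, 64, 32, 32) for _ in range(3))
avail = torch.tensor([[True, True, False]] * 2)  # radar unavailable
out = CASAPFusion(dim=64)([cam, lidar, radar], avail)
print(out.shape)  # torch.Size([2, 64, 1024])
```

The masking is what gives the availability-aware behavior: because a missing sensor contributes neither keys nor weight in the final average, the fused output degrades gracefully rather than being polluted by failed-sensor features.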