It is counter-intuitive that multi-modality methods based on point clouds and images perform only marginally better, and sometimes even worse, than approaches that use point clouds alone. This paper investigates the reason behind this phenomenon. Because multi-modality data augmentation must maintain consistency between the point cloud and the images, recent methods in this field typically use relatively weak data augmentation, which leaves their performance below expectation. Therefore, we contribute a pipeline, named transformation flow, to bridge the gap between single- and multi-modality data augmentation by reversing and replaying transformations. In addition, due to occlusion, a point in different modalities may be occupied by different objects, which makes augmentations such as cut-and-paste non-trivial for multi-modality detection. We further present Multi-mOdality Cut and pAste (MoCa), which simultaneously considers occlusion and physical plausibility to maintain multi-modality consistency. Without using an ensemble of detectors, our multi-modality detector achieves new state-of-the-art performance on the nuScenes dataset and competitive performance on the KITTI 3D benchmark. Our method also won the best PKL award in the 3rd nuScenes detection challenge. Code and models will be released at https://github.com/open-mmlab/mmdetection3d.
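To make the idea of transformation reversing and replaying concrete, the following is a minimal, hypothetical sketch of how a transformation flow could be maintained; the class name, API, and use of homogeneous 4x4 matrices are assumptions for illustration and do not reflect the authors' actual implementation in mmdetection3d. The point is that each geometric augmentation applied to the point cloud is recorded, so augmented points can be mapped back to the original LiDAR frame (e.g., for projection into the un-augmented image) or the same augmentation can be replayed on additional points such as pasted objects.

```python
import numpy as np

class TransformationFlow:
    """Hypothetical sketch of a transformation flow: record each point-cloud
    augmentation so it can later be reversed or replayed."""

    def __init__(self):
        self._transforms = []  # 4x4 homogeneous matrices, in application order

    def record(self, matrix):
        """Record one augmentation step as a 4x4 homogeneous transform."""
        self._transforms.append(np.asarray(matrix, dtype=np.float64))

    def replay(self, points):
        """Apply all recorded transforms, in order, to Nx3 points."""
        homo = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
        for m in self._transforms:
            homo = homo @ m.T
        return homo[:, :3]

    def reverse(self, points):
        """Undo all recorded transforms (last first) on Nx3 points, recovering
        coordinates that align with the untouched camera image."""
        homo = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
        for m in reversed(self._transforms):
            homo = homo @ np.linalg.inv(m).T
        return homo[:, :3]


# Usage: record a global rotation about z and a uniform scaling, then reverse
# them so augmented points can be projected into the un-augmented image.
flow = TransformationFlow()
theta = np.deg2rad(5.0)
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0, 0],
                  [np.sin(theta),  np.cos(theta), 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
scale = np.diag([1.05, 1.05, 1.05, 1.0])
flow.record(rot_z)
flow.record(scale)

pts = np.random.rand(100, 3)
augmented = flow.replay(pts)
recovered = flow.reverse(augmented)
assert np.allclose(recovered, pts)
```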