3D semantic segmentation is essential for autonomous driving and road infrastructure analysis, but state-of-the-art 3D models suffer from severe domain shift when applied across datasets. We propose a multi-view projection framework for unsupervised domain adaptation (UDA). Our method aligns LiDAR scans into coherent 3D scenes and renders them from multiple virtual camera poses to generate large-scale synthetic 2D datasets (PC2D) in several modalities. An ensemble of 2D segmentation models is trained on these modalities; at inference time, hundreds of views per scene are processed and their logits are back-projected to 3D with an occlusion-aware voting scheme to produce point-wise labels. These labels can be used directly or to fine-tune a 3D segmentation model in the target domain. We evaluate our approach in both Real-to-Real and Simulation-to-Real UDA, achieving state-of-the-art performance in the Real-to-Real setting. Furthermore, we show that our framework enables segmentation of rare classes by leveraging only 2D annotations for those classes while relying on 3D annotations for the remaining classes in the source domain.
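To make the back-projection step concrete, the following is a minimal sketch (not the authors' released code) of how per-view 2D logits could be aggregated onto 3D points with an occlusion-aware voting scheme: each virtual view contributes votes only to the points that pass a z-buffer visibility test in that view. All names and parameters here (`backproject_votes`, `views`, `z_tol`) are illustrative assumptions.

```python
import numpy as np

def backproject_votes(points, views, num_classes, z_tol=0.05):
    """points: (N, 3) scene points in world coordinates.
    views: iterable of dicts with 'K' (3x3 intrinsics),
           'T' (4x4 world-to-camera), 'logits' (H, W, C).
    Returns point-wise labels from aggregated logits."""
    votes = np.zeros((points.shape[0], num_classes), dtype=np.float32)

    for view in views:
        K, T, logits = view["K"], view["T"], view["logits"]
        H, W, _ = logits.shape

        # Transform points into the camera frame and project to pixels.
        pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
        cam = (T @ pts_h.T).T[:, :3]
        depth = cam[:, 2]
        in_front = depth > 1e-6
        uvw = (K @ cam.T).T
        uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

        # Occlusion test: build a z-buffer of the nearest depth per pixel
        # and keep only points within z_tol of that nearest depth.
        zbuf = np.full((H, W), np.inf, dtype=np.float32)
        np.minimum.at(zbuf, (v[valid], u[valid]), depth[valid])
        visible = valid.copy()
        visible[valid] &= depth[valid] <= zbuf[v[valid], u[valid]] + z_tol

        # Accumulate the view's 2D logits onto the visible 3D points.
        votes[visible] += logits[v[visible], u[visible]]

    return votes.argmax(axis=1)
```

In this sketch the per-view votes are simply summed before the argmax; weighting views by confidence or viewing angle would be a natural variant, but the paper's exact voting rule is not specified in the abstract.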