Current novel view synthesis methods are typically designed for high-quality, clean input images. In foggy scenes, however, scattering and attenuation can significantly degrade rendering quality. Although NeRF-based dehazing approaches have been developed, their reliance on deep fully connected neural networks and per-ray sampling strategies leads to high computational costs. Furthermore, NeRF's implicit representation limits its ability to recover fine-grained details from hazy scenes. To overcome these limitations, we propose learning an explicit Gaussian representation that explains the formation of foggy images through a physical forward rendering process. Our method, DehazeGS, reconstructs and renders fog-free scenes using only multi-view foggy images as input. Specifically, based on the atmospheric scattering model, we simulate fog formation by defining the transmission function directly on Gaussian primitives via a depth-to-transmission mapping. During training, we jointly learn the atmospheric light and scattering coefficients while optimizing the Gaussian representation of the foggy scene. At inference time, we remove the effects of scattering and attenuation from the Gaussian distributions and directly render the scene to obtain dehazed views. Experiments on both real-world and synthetic foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance. Visualizations are available at https://dehazegs.github.io/
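The forward model referenced in the abstract is the standard atmospheric scattering (Koschmieder) formulation, I(x) = J(x) t(x) + A (1 − t(x)) with transmission t(x) = exp(−β d(x)). The PyTorch sketch below illustrates how such a depth-to-transmission mapping could be applied per Gaussian primitive and how the atmospheric light and scattering coefficient could be treated as learnable parameters; all names (foggy_gaussian_color, beta, airlight) are illustrative assumptions, not the authors' implementation.

```python
import torch

def foggy_gaussian_color(clear_color: torch.Tensor,   # (N, 3) per-Gaussian clear color J
                         gaussian_depth: torch.Tensor, # (N,)  per-Gaussian depth d along the view ray
                         beta: torch.Tensor,           # scalar scattering coefficient (learnable)
                         airlight: torch.Tensor        # (3,)  atmospheric light A (learnable)
                         ) -> torch.Tensor:
    """Apply the atmospheric scattering model per Gaussian:
    attenuate the clear color by its transmission and blend in airlight."""
    t = torch.exp(-beta * gaussian_depth).unsqueeze(-1)   # (N, 1) transmission from depth
    return clear_color * t + airlight * (1.0 - t)         # (N, 3) foggy color used during training

# During training, beta and airlight would be optimized jointly with the Gaussian
# attributes against the observed foggy views; at inference, rendering clear_color
# directly (i.e., skipping this mapping) yields the dehazed novel views.
```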