Block-Matching and 3D Filtering (BM3D) exploits non-local self-similarity priors for denoising but relies on fixed parameters. Deep models such as U-Net are more flexible but often lack interpretability and fail to generalize across noise regimes. In this study, we propose Deep Unfolded BM3D (DU-BM3D), a hybrid framework that unrolls BM3D into a trainable architecture by replacing its fixed collaborative filtering with a learnable U-Net denoiser. This preserves BM3D's non-local structural prior while enabling end-to-end optimization. We evaluate DU-BM3D on low-dose CT (LDCT) denoising and show that it outperforms classic BM3D and a standalone U-Net on simulated LDCT data at different noise levels, yielding higher PSNR and SSIM, especially under high-noise conditions.
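The abstract only names the components, so the following is a minimal, self-contained PyTorch sketch of what one unfolded DU-BM3D stage could look like; `SmallUNet`, `block_match`, `aggregate`, `du_bm3d_stage`, and the patch/group sizes are illustrative assumptions, not the paper's implementation. The sketch groups similar patches by L2 block matching, filters each 3D group jointly with a small learnable U-Net-style network in place of BM3D's fixed collaborative filtering, and averages the filtered patches back into the image.

```python
# Hypothetical sketch of one unfolded DU-BM3D stage (not the paper's code):
# block matching keeps BM3D's non-local grouping, while the fixed collaborative
# filter is replaced by a small trainable network applied to each 3D group.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallUNet(nn.Module):
    """Tiny U-Net-style denoiser that filters a group of k matched patches jointly."""
    def __init__(self, k=8, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(k, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(ch, ch, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(ch, k, 3, padding=1))

    def forward(self, x):
        return x + self.dec(self.enc(x))  # residual denoising of the patch group


def block_match(img, patch=8, stride=8, k=8):
    """Group each reference patch with its k most similar patches (L2 distance)."""
    patches = F.unfold(img, patch, stride=stride)   # (1, p*p, N)
    patches = patches.squeeze(0).t()                # (N, p*p)
    dist = torch.cdist(patches, patches)            # pairwise patch distances
    idx = dist.topk(k, largest=False).indices       # (N, k), includes the patch itself
    return patches[idx], idx                        # groups: (N, k, p*p)


def aggregate(groups, idx, img_shape, patch=8, stride=8):
    """Average each patch over its group memberships, then fold back to the image."""
    n, k, pp = groups.shape
    acc = torch.zeros(n, pp)
    cnt = torch.zeros(n, 1)
    flat = idx.reshape(-1)
    acc.index_add_(0, flat, groups.reshape(-1, pp))
    cnt.index_add_(0, flat, torch.ones(n * k, 1))
    avg = (acc / cnt.clamp_min(1)).t().unsqueeze(0)  # (1, p*p, N)
    out = F.fold(avg, img_shape, patch, stride=stride)
    norm = F.fold(torch.ones_like(avg), img_shape, patch, stride=stride)
    return out / norm


def du_bm3d_stage(noisy, denoiser, patch=8, stride=8, k=8):
    groups, idx = block_match(noisy, patch, stride, k)
    g = groups.reshape(groups.shape[0], k, patch, patch)  # 3D group as k channels
    g = denoiser(g).reshape(groups.shape)                 # learnable collaborative filtering
    return aggregate(g, idx, noisy.shape[-2:], patch, stride)


if __name__ == "__main__":
    clean = torch.rand(1, 1, 64, 64)
    noisy = clean + 0.1 * torch.randn_like(clean)
    denoised = du_bm3d_stage(noisy, SmallUNet())
    print(denoised.shape)  # torch.Size([1, 1, 64, 64])
```

Stacking several such stages and training the denoiser weights end-to-end on paired noisy/clean LDCT slices would correspond to the unfolding idea described in the abstract.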