Message-passing algorithms have been adapted for compressive imaging by incorporating various off-the-shelf image denoisers. However, these denoisers rely largely on generic or hand-crafted priors and often fall short of accurately capturing the complex statistical structure of natural images. As a result, traditional plug-and-play (PnP) methods tend to yield suboptimal reconstructions, especially in highly underdetermined regimes. Recently, score-based generative models have emerged as a powerful framework for accurately characterizing sophisticated image distributions. Yet, their direct use for posterior sampling typically incurs prohibitive computational complexity. In this paper, by exploiting the close connection between score-based generative modeling and empirical Bayes denoising, we devise a message-passing framework that integrates a score-based minimum mean-squared error (MMSE) denoiser for compressive image recovery. The resulting algorithm, named score-based turbo message passing (STMP), combines the fast convergence of message passing with the expressive power of score-based generative priors. For practical systems with quantized measurements, we further propose quantized STMP (Q-STMP), which augments STMP with a component-wise MMSE dequantization module. We show that the asymptotic performance of STMP and Q-STMP can be accurately predicted by a set of state-evolution (SE) equations. Experiments on the FFHQ dataset demonstrate that STMP achieves a significantly better performance-complexity tradeoff than competing baselines, and that Q-STMP remains robust even under 1-bit quantization. Remarkably, both STMP and Q-STMP typically converge within 10 iterations.
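As background for the empirical Bayes connection invoked above, a minimal sketch: Tweedie's identity expresses the MMSE denoiser under additive Gaussian noise through the score of the noisy marginal, which is presumably how a learned score model can act as the MMSE denoising module inside the message-passing iterations (the symbols $r$, $\sigma$, and $s_\theta$ below are illustrative and need not match the paper's notation).

% Tweedie's identity: for the Gaussian observation model r = x + n, n ~ N(0, sigma^2 I),
% the MMSE estimate of x is the observation corrected by the score of p_r.
\begin{equation}
  \hat{x}_{\mathrm{MMSE}}(r) \;=\; \mathbb{E}[x \mid r]
  \;=\; r + \sigma^{2}\, \nabla_{r} \log p_{r}(r)
  \;\approx\; r + \sigma^{2}\, s_{\theta}(r, \sigma),
\end{equation}
% where s_theta(r, sigma) is a trained score network approximating the gradient of log p_r.

Under this identity, any score network trained for generative modeling doubles as an approximate MMSE denoiser, which is the property a PnP-style message-passing loop can exploit.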