Self-supervised learning for inverse problems makes it possible to train a reconstruction network from noisy and/or incomplete data alone. These methods have the potential to enable learning-based solutions when obtaining ground-truth references for training is expensive or even impossible. In this paper, we propose a new self-supervised learning strategy devised for the challenging setting where measurements are observed via a single incomplete observation model. We introduce a new definition of equivariance in the context of reconstruction networks, and show that the combination of self-supervised splitting losses and equivariant reconstruction networks results in unbiased estimates of the supervised loss. Through a series of experiments on image inpainting, accelerated magnetic resonance imaging, sparse-view computed tomography, and compressive sensing, we demonstrate that the proposed loss achieves state-of-the-art performance in settings with highly rank-deficient forward models. The code is available at https://github.com/vsechaud/Equivariant-Splitting.
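To make the idea of a self-supervised splitting loss concrete, the following is a minimal sketch, not the paper's exact formulation: the observed measurement entries are split into an input set and a held-out set, the network reconstructs from the input split, and the loss is computed only on the held-out split. All names (`splitting_loss`, `identity_recon`) and the toy inpainting setup are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inpainting setup (hypothetical): the single incomplete observation
# model A keeps ~60% of the pixels and zeroes out the rest.
n = 32
signal = rng.standard_normal(n)
obs_mask = rng.random(n) < 0.6      # which entries are actually observed
y = obs_mask * signal               # measurements y = A x

def splitting_loss(recon_fn, y, obs_mask, split_p=0.5, rng=rng):
    """Self-supervised measurement-splitting loss (illustrative sketch).

    Observed entries are randomly split into an input set and a held-out
    target set; the reconstruction is penalised on the held-out set only,
    so no ground-truth signal is ever needed.
    """
    keep = (rng.random(y.shape) < split_p) & obs_mask   # input split
    target = obs_mask & ~keep                            # held-out split
    x_hat = recon_fn(keep * y, keep)
    # Compare in measurement space on the held-out entries only.
    return np.sum(target * (x_hat - y) ** 2) / max(target.sum(), 1)

# Trivial stand-in "reconstruction network": identity on the input split.
identity_recon = lambda y_in, mask: y_in
loss = splitting_loss(identity_recon, y, obs_mask)

# Equivariance (here w.r.t. cyclic shifts) asks that reconstruction and
# group action commute; the identity map satisfies this trivially.
shift = lambda v: np.roll(v, 1)
equivariant = np.allclose(identity_recon(shift(y), obs_mask),
                          shift(identity_recon(y, obs_mask)))
```

In the paper's setting, the reconstruction network is trained so that this splitting loss, combined with the equivariance property checked above, yields an unbiased estimate of the supervised loss even though each signal is seen through a single incomplete observation model.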