We propose aNETT (augmented NETwork Tikhonov) regularization as a novel data-driven reconstruction framework for solving inverse problems. An encoder-decoder type network defines a regularizer consisting of a penalty term that enforces regularity in the encoder domain, augmented by a term penalizing the distance to the data manifold. We present a rigorous convergence analysis including stability estimates and convergence rates. For that purpose, we prove the coercivity of the regularizer used without requiring explicit coercivity assumptions for the networks involved. We propose a possible realization together with a network architecture and a modular training strategy. Applications to sparse-view and low-dose CT show that aNETT achieves results comparable to state-of-the-art deep-learning-based reconstruction methods. Unlike learned iterative methods, aNETT does not require repeated application of the forward and adjoint models, which enables the use of aNETT for inverse problems with numerically expensive forward models. Furthermore, we show that aNETT trained on coarsely sampled data can leverage an increased sampling rate without the need for retraining.
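The regularizer described above combines a penalty on the encoder coefficients with a distance-to-manifold term. As a minimal sketch (not the paper's implementation), the resulting variational objective can be written with toy stand-ins for the trained encoder and decoder; the functions `encoder`, `decoder`, and `anett_objective` below are hypothetical illustrations, and the sparsity penalty and weights `alpha`, `beta` are assumptions for the sake of the example:

```python
import numpy as np

# Toy stand-ins for the trained encoder-decoder pair (not real networks).
def encoder(x):
    return x[::2]            # "encode" by subsampling

def decoder(z):
    return np.repeat(z, 2)   # "decode" by upsampling

def anett_objective(x, y, forward, alpha=1.0, beta=1.0):
    """Sketch of an aNETT-style objective: data fidelity plus a penalty
    on the encoder coefficients, augmented by the squared distance of x
    to the range of the decoder (a proxy for the data manifold)."""
    z = encoder(x)
    fidelity = 0.5 * np.sum((forward(x) - y) ** 2)
    regularity = np.sum(np.abs(z))               # penalty in the encoder domain
    manifold = np.sum((decoder(z) - x) ** 2)     # distance-to-manifold penalty
    return fidelity + alpha * (regularity + beta * manifold)
```

In a real reconstruction this objective would be minimized over `x` by a gradient-based solver; note that the forward operator `forward` appears only in the fidelity term, which is why its (possibly expensive) evaluation is decoupled from the learned regularizer.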