This paper discusses basic results and recent developments on variational regularization methods as developed for inverse problems. In a typical setup we review the basic properties needed to obtain a convergent regularization scheme, and further discuss the derivation of quantitative estimates together with the ingredients required for them, such as Bregman distances for convex functionals. In addition to the approach developed for inverse problems, we also discuss variational regularization in machine learning and work out some connections to classical regularization theory. In particular, we discuss a reinterpretation of machine learning problems in the framework of regularization theory and a reinterpretation of variational methods for inverse problems in the framework of risk minimization. Moreover, we establish some previously unknown connections between error estimates in Bregman distances and generalization errors.