In this paper we study nonparametric regression by an over-parameterized two-layer neural network trained by gradient descent (GD). We show that, if the network is trained by GD with early stopping, then the trained network achieves a sharp rate of $\mathcal{O}(\epsilon_n^2)$ for the nonparametric regression risk, the same rate attained by classical kernel regression trained by GD with early stopping, where $\epsilon_n$ is the critical population rate of the Neural Tangent Kernel (NTK) associated with the network and $n$ is the size of the training data. We remark that our result requires no distributional assumptions on the covariates beyond boundedness, in strong contrast to many existing results that rely on specific covariate distributions, such as the spherical uniform distribution or distributions satisfying certain restrictive conditions. The rate $\mathcal{O}(\epsilon_n^2)$ is known to be minimax optimal in specific cases, for example when the NTK has a polynomial eigenvalue decay rate, which holds under certain distributional assumptions on the covariates. Our result thus formally closes the gap between training a classical kernel regression model and training an over-parameterized but finite-width neural network by GD for nonparametric regression, without distributional assumptions on the bounded covariates. We also give affirmative answers to several open questions, and address particular concerns, in the literature on training over-parameterized neural networks by GD with early stopping for nonparametric regression, including the characterization of the stopping time, the lower bound on the network width, and the constant learning rate used in GD.
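To make the role of $\epsilon_n$ concrete, the following display is a minimal sketch of how a critical population rate is commonly defined in the kernel early-stopping literature; the precise definition used in this paper may differ in constants and in how the noise level enters, and the symbols $\mu_j$ (population eigenvalues of the NTK) and $\sigma$ (noise standard deviation) are introduced here only for illustration. Given the eigenvalues $\mu_1 \ge \mu_2 \ge \cdots \ge 0$ of the NTK with respect to the covariate distribution, one may take
\[
\mathcal{R}(\epsilon) \;=\; \sqrt{\frac{1}{n}\sum_{j \ge 1} \min\{\epsilon^2,\mu_j\}},
\qquad
\epsilon_n \;=\; \inf\Big\{\epsilon > 0 \;:\; \mathcal{R}(\epsilon) \le \frac{\epsilon^2}{\sigma}\Big\}.
\]
Under a polynomial eigenvalue decay $\mu_j \asymp j^{-2\alpha}$ with $\alpha > 1/2$, this sketch gives $\epsilon_n^2 \asymp n^{-2\alpha/(2\alpha+1)}$, which coincides with the classical minimax rate for the corresponding kernel class; this is the sense in which $\mathcal{O}(\epsilon_n^2)$ is minimax optimal in the polynomial-decay case mentioned above.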