Designing the architecture of a neural network is a challenging task for the machine learning community, and whether to go deeper or wider remains a persistent question. This paper compares deeper neural networks (DeNNs) with a flexible number of layers against wider neural networks (WeNNs) with a limited number of hidden layers, focusing on their optimal generalization error with respect to Sobolev losses. Our analysis reveals that the preferred architecture depends on several factors: the number of sample points, the number of parameters in the network, and the regularity of the loss function. Specifically, a larger number of parameters tends to favor WeNNs, whereas more sample points and greater regularity of the loss function favor DeNNs. We ultimately apply this theory to the solution of partial differential equations with the deep Ritz and physics-informed neural network (PINN) methods, where it guides the design of the neural networks.
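To make the architectural distinction concrete, the following is a minimal sketch (not from the paper) assuming PyTorch: a WeNN with a fixed, small number of wide hidden layers and a DeNN with many hidden layers of moderate width, chosen here so the two have roughly comparable parameter counts. The widths and depths are arbitrary illustrative choices.

```python
import torch.nn as nn

def wenn(in_dim=1, width=512, out_dim=1):
    # WeNN: a limited number of hidden layers (two here), each one wide.
    return nn.Sequential(
        nn.Linear(in_dim, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, out_dim),
    )

def denn(in_dim=1, width=128, depth=16, out_dim=1):
    # DeNN: a flexible (here, large) number of hidden layers of moderate width.
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)
```

With these hypothetical settings both networks have on the order of 2.5e5 trainable parameters, so a comparison between them isolates the depth-versus-width trade-off rather than model size.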