Recently, neural networks have demonstrated an impressive ability to solve partial differential equations (PDEs). Among them, the Fourier neural operator (FNO) has shown success in learning solution operators for highly nonlinear problems such as turbulent flow. FNO is discretization-invariant: it can be trained on low-resolution data and generalize to high-resolution problems. This property stems from the low-pass filters in FNO, which select only a limited number of frequency modes to propagate information. However, choosing an appropriate number of frequency modes and training resolution for different PDEs remains a challenge. Too few frequency modes and low-resolution data hurt generalization, while too many frequency modes and high-resolution data are computationally expensive and lead to over-fitting. To this end, we propose the Incremental Fourier Neural Operator (IFNO), which augments both the frequency modes and the data resolution incrementally during training. We show that IFNO achieves better generalization (around a 15% reduction in testing L2 loss) while reducing the computational cost by 35%, compared to the standard FNO. In addition, we observe that IFNO follows the behavior of implicit regularization in FNO, which explains its excellent generalization ability.
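The core idea can be illustrated with a minimal 1-D sketch: a spectral layer keeps only the first k Fourier modes (FNO's low-pass step), and a schedule grows k as training proceeds. The function names and schedule parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spectral_lowpass(x, n_modes):
    """Keep only the first n_modes Fourier modes of a 1-D signal,
    mimicking the low-pass truncation inside an FNO spectral layer."""
    coeffs = np.fft.rfft(x)
    kept = np.zeros_like(coeffs)
    kept[:n_modes] = coeffs[:n_modes]  # truncate high frequencies
    return np.fft.irfft(kept, n=x.shape[-1])

def incremental_mode_schedule(epoch, start=4, step=4, every=10, max_modes=32):
    """Hypothetical IFNO-style schedule: start with few active modes
    and add more every few epochs, capped at max_modes."""
    return min(start + (epoch // every) * step, max_modes)

# Demo: a pure low-frequency signal passes through unchanged,
# and the number of active modes grows with the epoch.
x = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
for epoch in (0, 10, 50):
    k = incremental_mode_schedule(epoch)
    y = spectral_lowpass(x, k)
```

Because the mode count starts small, early training is cheap and biased toward smooth, low-frequency structure; the cap plays the role of the full model's mode budget.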