Multivariable time series forecasting methods can integrate information from exogenous variables, leading to significant gains in prediction accuracy. The transformer architecture has been widely applied in time series forecasting models due to its ability to capture long-range sequential dependencies. However, a naïve application of transformers often struggles to effectively model the complex relationships among variables over time. To mitigate this, we propose a novel architecture, termed Spectral Operator Neural Network (Sonnet). Sonnet applies learnable wavelet transformations to the input and incorporates spectral analysis using the Koopman operator. Its predictive skill relies on Multivariable Coherence Attention (MVCA), an operation that leverages spectral coherence to model variable dependencies. Our empirical analysis shows that Sonnet yields the best performance on $34$ out of $47$ forecasting tasks, with an average mean absolute error (MAE) reduction of $2.2\%$ against the most competitive baseline. We further show that MVCA can remedy the deficiencies of naïve attention in various deep learning models, reducing MAE by $10.7\%$ on average on the most challenging forecasting tasks.