Graph Neural Networks (GNNs) with equivariant properties have achieved significant success in modeling complex dynamic systems and molecular properties. However, their expressiveness is limited in two ways: (1) existing methods often overlook the over-smoothing issue inherited from traditional GNN architectures, as well as the exploding and vanishing gradient problems that arise in deep GNNs; (2) most models operate only on first-order information, even though real-world dynamics are often governed by second-order systems, which further limits their representational capability. To address these issues, we propose the \textbf{Du}al \textbf{S}econd-order \textbf{E}quivariant \textbf{G}raph \textbf{O}rdinary Differential Equation (\method{}) for equivariant representation. Specifically, \method{} applies dual second-order equivariant graph ordinary differential equations (Graph ODEs) to graph embeddings and node coordinates simultaneously. Theoretically, we first prove that \method{} preserves the equivariance property. Furthermore, we provide theoretical insights showing that \method{} effectively alleviates over-smoothing in both the feature representations and the coordinate updates. Additionally, we show that \method{} mitigates exploding and vanishing gradients, facilitating the training of deep multi-layer GNNs. Extensive experiments on benchmark datasets validate the superiority of the proposed \method{} over the baselines.
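As a minimal sketch of the idea (illustrative notation only, not the paper's exact formulation), the dual second-order Graph ODEs can be viewed as coupled dynamics on the node embeddings $\mathbf{H}(t)$ and node coordinates $\mathbf{X}(t)$, where $f_\theta$ and $g_\phi$ stand for learned equivariant message-passing vector fields:
\begin{equation*}
\frac{\mathrm{d}^2 \mathbf{H}(t)}{\mathrm{d}t^2} = f_\theta\big(\mathbf{H}(t), \mathbf{X}(t)\big),
\qquad
\frac{\mathrm{d}^2 \mathbf{X}(t)}{\mathrm{d}t^2} = g_\phi\big(\mathbf{H}(t), \mathbf{X}(t)\big).
\end{equation*}
As is standard for second-order ODEs, such a system can be integrated as an augmented first-order system by introducing auxiliary velocity states $\mathbf{V}_H(t) = \mathrm{d}\mathbf{H}(t)/\mathrm{d}t$ and $\mathbf{V}_X(t) = \mathrm{d}\mathbf{X}(t)/\mathrm{d}t$.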