Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning. Compared with nonlinear GNN models, linearized GNNs are much more time-efficient and can achieve comparable performance on typical downstream tasks such as node classification. Although some linearized GNN variants are purposely crafted to mitigate ``over-smoothing'', empirical studies show that they still suffer from this issue to some extent. In this paper, we instead relate over-smoothing to the vanishing gradient phenomenon and craft a gradient-free training framework that yields more efficient and effective linearized GNNs, significantly alleviating over-smoothing and enhancing model generalization. Experimental results demonstrate that our method achieves better and more stable performance on node classification tasks across varying model depths while requiring far less training time.
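To make the terminology concrete, the sketch below illustrates the standard linearized-GNN recipe (in the style of SGC): the nonlinearities between propagation layers are dropped, so the $K$-hop feature smoothing $S^K X$ can be precomputed once and only a linear readout remains to be trained. The closed-form ridge-regression readout shown here is purely an illustrative stand-in for a gradient-free fit, not the training framework proposed in this paper, and all function names are hypothetical.

```python
import numpy as np

def sym_norm_adj(A):
    # Symmetrically normalized adjacency with self-loops:
    # S = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def linearized_gnn_features(A, X, k):
    # K-step linear feature propagation, precomputed once
    # (no nonlinearity between steps, as in SGC).
    S = sym_norm_adj(A)
    for _ in range(k):
        X = S @ X
    return X

def closed_form_fit(H, Y, reg=1e-2):
    # Ridge-regression readout solved in closed form, i.e. with no
    # gradient descent -- an illustrative stand-in for a
    # gradient-free training step, not the paper's actual method.
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + reg * np.eye(d), H.T @ Y)

# Toy usage: a 4-node path graph, random features, 2 classes (one-hot Y).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
Y = np.eye(2)[[0, 0, 1, 1]]
H = linearized_gnn_features(A, X, k=2)
W = closed_form_fit(H, Y)
pred = (H @ W).argmax(axis=1)
```

Because the propagation $S^K X$ is computed once up front, the per-epoch cost of such models is that of fitting a linear classifier, which is where the time-efficiency claim comes from.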