Graph neural networks (GNNs) have demonstrated significant success in graph classification, yet they often require substantial computational resources and struggle to capture global graph properties effectively. We introduce LightTopoGAT, a lightweight graph attention network that augments node features with two topological descriptors, node degree and the local clustering coefficient, to improve graph representation learning. The proposed approach maintains parameter efficiency through streamlined attention mechanisms while integrating structural information that local message-passing schemes typically overlook. In comprehensive experiments on three benchmark datasets (MUTAG, ENZYMES, and PROTEINS), LightTopoGAT outperforms established baselines, including GCN, GraphSAGE, and the standard GAT, improving accuracy by 6.6% on MUTAG and 2.2% on PROTEINS. Ablation studies confirm that these gains arise directly from the added topological features, demonstrating a simple yet effective strategy for improving GNN performance without increasing architectural complexity.
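To make the topological augmentation concrete, the sketch below shows one plausible way to append node degree and local clustering coefficient to each node's input features before they reach the attention layers. This is a minimal illustration, not the authors' reference implementation: the helper name `augment_with_topology` and the choice of PyTorch Geometric and NetworkX are assumptions made for this example.

```python
# Hypothetical sketch of the topological feature augmentation described in the
# abstract: append node degree and local clustering coefficient to each node's
# feature vector. Library choices (PyTorch Geometric, NetworkX) and the helper
# name are illustrative assumptions, not the paper's reference implementation.
import torch
import networkx as nx
from torch_geometric.data import Data
from torch_geometric.utils import to_networkx

def augment_with_topology(data: Data) -> Data:
    """Concatenate [degree, clustering coefficient] onto data.x."""
    g = to_networkx(data, to_undirected=True)
    n = data.num_nodes
    degree = torch.tensor([g.degree(i) for i in range(n)], dtype=torch.float)
    clustering = torch.tensor([nx.clustering(g, i) for i in range(n)],
                              dtype=torch.float)
    topo = torch.stack([degree, clustering], dim=1)  # shape [n, 2]
    data.x = torch.cat([data.x, topo], dim=1) if data.x is not None else topo
    return data

# Example: a 4-node graph (a triangle plus one pendant node) with 3-dim features.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 0, 2, 3],
                           [1, 0, 2, 1, 0, 2, 3, 2]])
data = Data(x=torch.randn(4, 3), edge_index=edge_index)
data = augment_with_topology(data)
print(data.x.shape)  # torch.Size([4, 5]): 3 original features + degree + clustering
```

Because the augmented features are computed once per graph as a preprocessing step, this design adds no learnable parameters, which is consistent with the abstract's claim of improving accuracy without increasing architectural complexity.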