Heterogeneous Graph Neural Networks (HGNNs) are effective for modeling Heterogeneous Information Networks (HINs), which encode complex multi-typed entities and relations. However, HGNNs often suffer from type information loss and structural noise, limiting their representational fidelity and generalization. We propose THeGAU, a model-agnostic framework that combines a type-aware graph autoencoder with guided graph augmentation to improve node classification. THeGAU reconstructs schema-valid edges as an auxiliary task to preserve node-type semantics and introduces a decoder-driven augmentation mechanism to selectively refine noisy structures. This joint design improves robustness and accuracy while substantially reducing computational overhead. Extensive experiments on three benchmark HIN datasets (IMDB, ACM, and DBLP) demonstrate that THeGAU consistently outperforms existing HGNN methods, achieving state-of-the-art performance across multiple backbones.
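To make the idea of schema-valid edge reconstruction concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): an inner-product edge decoder whose scores are masked by a type-level schema matrix, so that only type-consistent edges contribute to the reconstruction loss. All names, the toy schema, and the adjacency matrix are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of schema-valid edge reconstruction (illustrative only).
# Toy HIN with two node types: 0 = movie, 1 = actor.
rng = np.random.default_rng(0)

node_types = np.array([0, 0, 1, 1])   # types of 4 nodes
Z = rng.normal(size=(4, 8))           # node embeddings from some encoder

# schema[s, t] = 1 iff an edge between type s and type t is allowed
schema = np.array([[0, 1],
                   [1, 0]])

# Inner-product decoder: sigmoid of pairwise embedding similarity
scores = 1.0 / (1.0 + np.exp(-Z @ Z.T))

# Mask out schema-invalid pairs so only type-consistent edges are reconstructed
valid = schema[node_types[:, None], node_types[None, :]]
masked_scores = scores * valid

# Observed adjacency (schema-valid by construction) and a masked BCE-style loss
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
eps = 1e-9
loss = -np.mean(A * np.log(masked_scores + eps)
                + (1 - A) * valid * np.log(1 - masked_scores + eps))
print(float(loss))
```

In this sketch the schema mask plays the role of the type-aware constraint: decoder scores for type pairs the schema forbids are zeroed out, so the auxiliary reconstruction objective can only reward edges that respect the HIN's schema.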