Ensuring fairness in Graph Neural Networks (GNNs) is fundamental to building trustworthy and socially responsible machine learning systems, and numerous fair graph learning methods have been proposed in recent years. However, most of them assume full access to demographic information, a requirement rarely met in practice due to privacy, legal, or regulatory restrictions. To address this limitation, this paper introduces FairGLite, a fair graph learning framework that mitigates bias when only limited demographic information is available. Specifically, we propose a mechanism, guided by the partially observed demographic data, that generates proxies for the missing demographic information, and we design a strategy that enforces consistent node embeddings across demographic groups. In addition, we develop an adaptive confidence strategy that dynamically adjusts each node's contribution to the fairness and utility objectives based on its prediction confidence. We further provide theoretical analysis showing that FairGLite achieves provable upper bounds on group fairness metrics, offering formal guarantees for bias mitigation. Through extensive experiments on multiple datasets and fair graph learning frameworks, we demonstrate the framework's effectiveness in both mitigating bias and maintaining model utility.
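To make the mechanisms described above concrete, the sketch below illustrates, under simplifying assumptions (a binary sensitive attribute and a generic GNN encoder producing embeddings `z`), how proxy demographics might be inferred from the few nodes with known attributes, and how a per-node confidence weight can modulate each node's contribution to a group-consistency fairness term. All names (`ProxyEstimator`, `confidence_weights`, `group_consistency_loss`) are hypothetical illustrations, not FairGLite's actual implementation.

```python
# Illustrative sketch only -- hypothetical names, not the FairGLite implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyEstimator(nn.Module):
    """Predicts a soft proxy for a binary sensitive attribute from node embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, z):
        return torch.sigmoid(self.head(z)).squeeze(-1)  # proxy value in (0, 1)

def confidence_weights(logits):
    """Max class probability per node; detached so it only reweights losses."""
    return F.softmax(logits, dim=-1).max(dim=-1).values.detach()

def group_consistency_loss(z, proxy, w):
    """Weighted discrepancy between the mean embeddings of the two proxy groups."""
    a = (w * proxy).unsqueeze(-1)          # soft membership in group 1
    b = (w * (1.0 - proxy)).unsqueeze(-1)  # soft membership in group 0
    mu1 = (a * z).sum(0) / a.sum().clamp_min(1e-8)
    mu0 = (b * z).sum(0) / b.sum().clamp_min(1e-8)
    return (mu1 - mu0).pow(2).sum()

# Toy usage: z and logits would come from any GNN encoder / classifier head.
n, d = 100, 16
z, logits = torch.randn(n, d), torch.randn(n, 2)
labels = torch.randint(0, 2, (n,))
known = torch.zeros(n, dtype=torch.bool)
known[:20] = True                                   # demographics observed for 20% of nodes
s_known = torch.randint(0, 2, (20,)).float()

proxy = ProxyEstimator(d)(z)
proxy_sup = F.binary_cross_entropy(proxy[known], s_known)  # supervise proxies on known nodes
w = confidence_weights(logits)                             # per-node adaptive weights
fair = group_consistency_loss(z, proxy, w)
util = F.cross_entropy(logits, labels)
loss = util + 0.5 * fair + proxy_sup                       # jointly optimize utility and fairness
```

Detaching the confidence weights is one plausible design choice: it lets high-confidence nodes drive the fairness term more strongly without giving the optimizer an incentive to lower confidence in order to shrink the fairness penalty.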