Federated learning (FL) offers privacy-preserving, distributed machine learning, allowing clients to contribute to a global model without revealing their local data. As models increasingly serve as monetizable digital assets, the ability to prove participation in their training becomes essential for establishing ownership. In this paper, we address this emerging need by introducing FedPoP, a novel FL framework that enables nonlinkable proof of participation while preserving client anonymity and privacy, without requiring either extensive computation or a public ledger. FedPoP is designed to integrate seamlessly with existing secure aggregation protocols, ensuring compatibility with real-world FL deployments. We provide a proof-of-concept implementation and an empirical evaluation under realistic client dropouts. In our prototype, FedPoP introduces 0.97 seconds of per-round overhead atop securely aggregated FL and enables a client to prove its participation in, and contribution to, a model held by a third party in 0.0612 seconds. These results indicate that FedPoP is practical for real-world deployments that require auditable participation without sacrificing privacy.