As Federated Learning (FL) becomes more widespread, there is growing interest in its decentralized variants. Decentralized FL leverages the benefits of fast and energy-efficient device-to-device communication to obviate the need for a central server. However, this also opens the door to new security vulnerabilities. While FL security has been a popular research topic, the role of adversarial node placement in decentralized FL remains largely unexplored. This paper addresses this gap by evaluating the impact of various coordinated adversarial node placement strategies on the model training performance of decentralized FL. We adapt two classes of placement strategies to this context: maximum span-based algorithms and network centrality-based approaches. Building on these, we propose a novel attack strategy, MaxSpAN-FL, a hybrid of the two paradigms that adjusts node placement probabilistically based on network topology characteristics. Numerical experiments demonstrate that our attack consistently induces the largest degradation in decentralized FL models compared with baseline schemes across various network configurations and numbers of coordinating adversaries. We also provide theoretical support for why eigenvector centrality-based attacks are suboptimal in decentralized FL. Overall, our findings provide valuable insights into the vulnerabilities of decentralized FL systems, setting the stage for future research aimed at developing more secure and robust decentralized FL frameworks.