Adapting large models through Federated Learning (FL) addresses a wide range of use cases and is enabled by Parameter-Efficient Fine-Tuning techniques such as Low-Rank Adaptation (LoRA). However, this distributed learning paradigm faces several security threats, particularly to its integrity, such as backdoor attacks that aim to inject malicious behavior during the local training steps of certain clients. We present the first analysis of the influence of LoRA on state-of-the-art backdoor attacks targeting model adaptation in FL. Specifically, we focus on backdoor lifespan, a critical characteristic in FL that can vary with the attack scenario and the attacker's ability to effectively inject the backdoor. A key finding of our experiments is that, for an optimally injected backdoor, the backdoor persists longer after the attack when LoRA's rank is lower. Importantly, our work highlights evaluation issues in backdoor attacks against FL and contributes to the development of more robust and fair evaluations of backdoor attacks, enhancing the reliability of risk assessments for critical FL systems. Our code is publicly available.
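Since LoRA's low-rank update and its rank are central to the analysis above, the following is a minimal sketch of the adapter mechanism. It is illustrative only; the dimensions, rank, and scaling factor are assumptions for the example, not the paper's experimental setup.

```python
import numpy as np

# LoRA adapts a frozen weight W with a low-rank update B @ A of rank r:
#     W' = W + (alpha / r) * B @ A
# B is zero-initialized, so the adapter starts as a no-op on the base model.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 4, 8  # toy dimensions (assumed for illustration)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero init)

def forward(x):
    # Base path plus scaled low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Zero-initialized adapter leaves the base model's output unchanged.
assert np.allclose(forward(x), W @ x)

# Only A and B are trained: r * (d_in + d_out) parameters instead of
# d_out * d_in for full fine-tuning; lowering r shrinks this budget.
print(r * (d_in + d_out), "trainable params vs", d_out * d_in, "full")
```

In an FL round, only the small `A` and `B` matrices would be exchanged and aggregated; the rank `r` controls the capacity of that update, which is the knob whose effect on backdoor lifespan the paper studies.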