Privacy-preserving machine learning (PPML) systems enable multiple data owners to collaboratively train models without revealing their raw, sensitive data by leveraging cryptographic protocols such as secure multi-party computation (MPC). While PPML offers strong privacy guarantees, it also introduces new attack surfaces: malicious data owners can inject poisoned data into the training process without being detected, thus undermining the integrity of the learned model. Although recent defenses, such as private input validation within MPC, can mitigate certain poisoning strategies, they remain insufficient, particularly against stealthy or distributed attacks. As the robustness of PPML remains an open challenge, strengthening trust in these systems increasingly necessitates post-hoc auditing mechanisms that instill accountability. In this paper, we present UTrace, a framework for user-level traceback in PPML that attributes integrity failures to responsible data owners without compromising the privacy guarantees of MPC. UTrace combines two mechanisms: a gradient similarity method that identifies suspicious update patterns linked to poisoning, and a user-level unlearning technique that quantifies each user's marginal influence on model behavior. Together, these methods allow UTrace to attribute model misbehavior to specific users with high precision. We implement UTrace within an MPC-compatible training and auditing pipeline and evaluate its effectiveness on four datasets spanning vision, text, and malware. Across ten canonical poisoning attacks, UTrace consistently achieves high detection accuracy with low false positive rates.
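To make the gradient-similarity idea concrete, the following is a minimal sketch, not the paper's implementation: each data owner is scored by how closely the gradients of their contributed examples align with the gradient of a misbehaving target sample, and owners with the highest scores are flagged as likely sources of the poisoning. All names (e.g., `user_suspicion_scores`, `owner_1`) and the plaintext NumPy setting are assumptions made for illustration; in the actual system such computations would run inside the MPC auditing pipeline.

```python
# Illustrative sketch of per-user gradient-similarity scoring (assumed names,
# plaintext computation for clarity only).
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


def user_suspicion_scores(
    user_gradients: dict[str, list[np.ndarray]],
    target_gradient: np.ndarray,
) -> dict[str, float]:
    """Average the similarity between each user's per-example training
    gradients and the gradient of the misbehaving target sample; a higher
    score suggests a larger contribution to the observed integrity failure."""
    scores = {}
    for user, grads in user_gradients.items():
        sims = [cosine_similarity(g, target_gradient) for g in grads]
        scores[user] = float(np.mean(sims)) if sims else 0.0
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 128
    # Hypothetical data: owner_2 holds gradients correlated with the target
    # sample's gradient, mimicking a poisoning contributor.
    target_grad = rng.normal(size=dim)
    user_gradients = {
        "owner_1": [rng.normal(size=dim) for _ in range(5)],
        "owner_2": [target_grad + 0.3 * rng.normal(size=dim) for _ in range(5)],
        "owner_3": [rng.normal(size=dim) for _ in range(5)],
    }
    for user, score in sorted(
        user_suspicion_scores(user_gradients, target_grad).items(),
        key=lambda kv: -kv[1],
    ):
        print(f"{user}: {score:.3f}")
```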