In this paper, we propose a Differentially Private Stochastic Gradient Push with Compressed communication (termed DP-CSGP) for decentralized learning over directed graphs. Unlike existing works, the proposed algorithm is designed to maintain high model utility while ensuring both a rigorous differential privacy (DP) guarantee and communication efficiency. For general non-convex and smooth objective functions, we show that the proposed algorithm achieves a tight utility bound of $\mathcal{O}\left( \sqrt{d\log(1/\delta)}/(\sqrt{n}J\epsilon) \right)$, where $n$, $J$, and $d$ denote the number of nodes, the number of local samples per node, and the dimension of the decision variable, respectively, under an $(\epsilon, \delta)$-DP guarantee for each node; this matches the bound of decentralized counterparts with exact communication. Extensive experiments on benchmark tasks show that, under the same privacy budget, DP-CSGP achieves comparable model accuracy with significantly lower communication cost than existing decentralized counterparts with exact communication.
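To make the ingredients named above concrete, the following is a minimal, illustrative sketch (not the paper's exact DP-CSGP algorithm) of one node's local step combining the three components: gradient clipping plus Gaussian noise for the per-node $(\epsilon,\delta)$-DP guarantee, a local SGD update, and top-$k$ compression of the message pushed to out-neighbors. The function names, hyperparameters, and the omission of the push-sum weight/de-biasing updates over the directed graph are assumptions made purely for illustration.

import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest (illustrative compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def private_compressed_step(x, grad_fn, lr=0.1, clip=1.0, sigma=1.0, k=10, rng=None):
    """One node's local update (sketch, not the paper's algorithm): clip the stochastic
    gradient, add Gaussian noise (Gaussian mechanism), take an SGD step, and top-k
    compress the message to be pushed to out-neighbors. Push-sum mixing is omitted."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_fn(x)
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # gradient clipping to norm <= clip
    g = g + rng.normal(0.0, sigma * clip, size=g.shape)    # Gaussian noise calibrated to the clip norm
    x_new = x - lr * g                                      # local SGD step
    return topk(x_new, k), x_new                            # compressed message, updated local iterate

# Toy usage on a single node with a noisy quadratic objective 0.5*||x - target||^2.
if __name__ == "__main__":
    d = 50
    target = np.ones(d)
    grad_fn = lambda x: x - target + 0.01 * np.random.randn(d)
    x = np.zeros(d)
    for _ in range(100):
        msg, x = private_compressed_step(x, grad_fn, lr=0.2, clip=5.0, sigma=0.1, k=10)

In the full decentralized setting, each node would push such compressed, privatized messages to its out-neighbors and aggregate incoming ones via push-sum weights; only the single-node privatization and compression step is sketched here.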