Membership inference attacks (MIAs) make it possible to empirically assess the privacy of a machine learning algorithm. In this paper, we propose TAMIS, a novel MIA against differentially private synthetic data generation methods that rely on graphical models. This attack builds upon MAMA-MIA, a recently published state-of-the-art method, while lowering its computational cost and requiring less attacker knowledge. Our attack is the product of a two-fold improvement. First, we recover the graphical model that generated a synthetic dataset using only that dataset, rather than shadow-modeling over an auxiliary one; this proves both less costly and more effective. Second, we introduce a more mathematically grounded attack score that provides a natural threshold for binary predictions. In our experiments, TAMIS achieves performance better than or comparable to MAMA-MIA on replicas of the SNAKE challenge.
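To make the general idea concrete, the following is a minimal, self-contained sketch of likelihood-based membership scoring against synthetic data, not the actual TAMIS algorithm: fit a discrete graphical model (here, a hand-fixed DAG with Laplace-smoothed conditional probability tables) from the synthetic data alone, then score each candidate record by its log-likelihood under that model and compare it to a threshold. The DAG structure `PARENTS`, the smoothing constant, and the threshold value below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: graphical-model likelihood scoring for membership
# inference, with a hand-picked structure and threshold (not TAMIS itself).
from collections import Counter
from math import log

import pandas as pd

# Hypothetical DAG over three categorical columns: attribute -> its parents.
PARENTS = {"A": (), "B": ("A",), "C": ("A", "B")}
ALPHA = 1.0  # Laplace smoothing constant (assumed)


def fit_cpts(synth: pd.DataFrame):
    """Estimate conditional probability tables from the synthetic data only."""
    cpts = {}
    for col, parents in PARENTS.items():
        levels = sorted(synth[col].unique())
        counts = Counter(
            (tuple(row[p] for p in parents), row[col]) for _, row in synth.iterrows()
        )
        parent_configs = {cfg for cfg, _ in counts}
        table = {}
        for cfg in parent_configs:
            total = sum(counts[(cfg, v)] for v in levels) + ALPHA * len(levels)
            table[cfg] = {v: (counts[(cfg, v)] + ALPHA) / total for v in levels}
        cpts[col] = (levels, table)
    return cpts


def log_likelihood(record, cpts) -> float:
    """Log-likelihood of one candidate record under the fitted model."""
    ll = 0.0
    for col, parents in PARENTS.items():
        levels, table = cpts[col]
        cfg = tuple(record[p] for p in parents)
        probs = table.get(cfg)
        if probs is None or record[col] not in probs:
            ll += log(1.0 / (len(levels) + 1))  # unseen configuration: back off
        else:
            ll += log(probs[record[col]])
    return ll


if __name__ == "__main__":
    synth = pd.DataFrame(
        {"A": ["x", "x", "y", "y"], "B": [0, 0, 1, 1], "C": ["u", "u", "v", "u"]}
    )
    cpts = fit_cpts(synth)
    candidate = {"A": "x", "B": 0, "C": "u"}
    score = log_likelihood(candidate, cpts)
    # Predict "member" when the score exceeds an illustrative threshold;
    # TAMIS instead derives a principled threshold from its attack score.
    threshold = -3.0
    print(score, score > threshold)
```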