Adversarial attacks pose a serious threat to the reliable deployment of machine learning models in safety-critical applications: by slightly modifying inputs, they can mislead current models into incorrect predictions. Recently, substantial work has shown that adversarial examples tend to deviate from the underlying data manifold of normal examples, whereas pre-trained masked language models can fit the manifold of normal NLP data. To explore how masked language models can be used for adversarial detection, we propose a novel textual adversarial example detection method, Masked Language Model-based Detection (MLMD), which produces clearly distinguishable signals between normal and adversarial examples by examining the changes in the manifold induced by the masked language model. MLMD features plug-and-play usage (i.e., no retraining of the victim model is needed) for adversarial defense, and it is agnostic to the classification task, the victim model's architecture, and the to-be-defended attack method. We evaluate MLMD on various benchmark textual datasets, widely studied machine learning models, and state-of-the-art (SOTA) adversarial attacks ($3 \times 4 \times 4 = 48$ settings in total). Experimental results show that MLMD achieves strong performance, with detection accuracy up to 0.984, 0.967, and 0.901 on the AG-NEWS, IMDB, and SST-2 datasets, respectively. Moreover, MLMD is superior, or at least comparable, to SOTA detection defenses in detection accuracy and F1 score. Among the many defenses based on the off-manifold assumption about adversarial examples, this work offers a new angle for capturing the manifold change. The code for this work is openly available at \url{https://github.com/mlmddetection/MLMDdetection}.