Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks. Injected with backdoors, models perform normally on benign examples but produce attacker-specified predictions when the backdoor is activated, presenting serious security threats to real-world applications. Since existing textual backdoor attacks pay little attention to the invisibility of backdoors, they can be easily detected and blocked. In this work, we present invisible backdoors that are activated by a learnable combination of word substitution. We show that NLP models can be injected with backdoors that lead to a nearly 100% attack success rate, whereas being highly invisible to existing defense strategies and even human inspections. The results raise a serious alarm to the security of NLP models, which requires further research to be resolved. All the data and code of this paper are released at https://github.com/thunlp/BkdAtk-LWS.
翻译:最近的研究显示,神经自然语言处理模式很容易受到后门攻击。 使用后门, 模型通常使用良性实例, 但却在后门启动时产生攻击者指定的预测, 给现实世界应用带来严重的安全威胁。 由于现有的文字后门攻击很少注意后门的隐形性, 很容易被检测和阻断。 在这项工作中, 我们展示了隐形的后门, 这些隐形的后门通过可学习的单词替换组合来激活。 我们显示, 后门可以注入NLP模型, 导致近100%的攻击成功率, 而对于现有的防御战略甚至人类检查来说, 却非常隐蔽。 结果对 NLP 模型的安全产生了严重的警示, 需要进一步研究解决。 本文的所有数据和代码都在 https://github. com/thunlp/ BkdAt-LWS 上公布 。