Certified Robustness to Text Adversarial Attacks by Randomized [MASK] (2105.03743v3)
Abstract: Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions. However, all existing certified defense methods assume that the defenders are informed of how the adversaries generate synonyms, which is not a realistic scenario. In this paper, we propose a certifiably robust defense method by randomly masking a certain proportion of the words in an input text, in which the above unrealistic assumption is no longer necessary. The proposed method can defend against not only word substitution-based attacks, but also character-level perturbations. We can certify the classifications of over 50% of texts to be robust to any perturbation of 5 words on AGNEWS, and 2 words on the SST2 dataset. The experimental results show that our randomized smoothing method significantly outperforms recently proposed defense methods across multiple datasets.
- Jiehang Zeng (5 papers)
- Xiaoqing Zheng (44 papers)
- Jianhan Xu (8 papers)
- Linyang Li (57 papers)
- Liping Yuan (13 papers)
- Xuanjing Huang (287 papers)
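
The core mechanism the abstract describes — randomly masking a proportion of input words and aggregating the classifier's predictions over many masked copies — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `classifier` callable, the mask rate, the sample count, and the `[MASK]` token choice are all assumptions for demonstration, and the statistical certification step from the paper is omitted.

```python
import random
from collections import Counter

MASK = "[MASK]"  # placeholder token; the paper masks words in the input text

def random_mask(words, mask_rate, rng):
    """Replace a random subset of the words with the mask token."""
    n_mask = int(len(words) * mask_rate)
    idx = rng.sample(range(len(words)), n_mask)
    masked = list(words)
    for i in idx:
        masked[i] = MASK
    return masked

def smoothed_predict(classifier, text, mask_rate=0.3, n_samples=100, seed=0):
    """Majority-vote prediction over randomly masked copies of the input.

    `classifier` is assumed to map a list of tokens to a label.
    The mask rate and sample count are illustrative defaults,
    not the settings reported in the paper.
    """
    rng = random.Random(seed)
    words = text.split()
    votes = Counter(
        classifier(random_mask(words, mask_rate, rng))
        for _ in range(n_samples)
    )
    label, _ = votes.most_common(1)[0]
    return label, votes
```

The intuition behind the majority vote: if an adversary perturbs only a few words, most random masked copies will hide at least some of the perturbed positions, so the vote distribution changes little; the paper turns this into a formal certificate by bounding how much the vote counts can shift under any perturbation of a given size.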