Certified Robustness to Text Adversarial Attacks by Randomized [MASK] (2105.03743v3)

Published 8 May 2021 in cs.CL

Abstract: Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions. However, all existing certified defense methods assume that the defenders are informed of how the adversaries generate synonyms, which is not a realistic scenario. In this paper, we propose a certifiably robust defense method by randomly masking a certain proportion of the words in an input text, in which the above unrealistic assumption is no longer necessary. The proposed method can defend against not only word substitution-based attacks, but also character-level perturbations. We can certify the classifications of over 50% of texts to be robust to any perturbation of 5 words on the AGNEWS dataset, and 2 words on the SST2 dataset. The experimental results show that our randomized smoothing method significantly outperforms recently proposed defense methods across multiple datasets.
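The core operation the abstract describes is simple: replace a random subset of a text's words with a mask token before classification. A minimal sketch of that masking step (the mask rate, token name, and seeding here are illustrative assumptions, not the paper's exact settings):

```python
import random

def random_mask(words, mask_rate=0.25, mask_token="[MASK]", rng=None):
    """Replace a random proportion of words with a mask token.

    `mask_rate` and `mask_token` are illustrative defaults; the paper
    tunes the masking proportion per dataset.
    """
    rng = rng or random.Random(0)  # fixed seed here only for reproducibility
    n = len(words)
    k = max(1, int(n * mask_rate))  # number of positions to mask
    masked_positions = set(rng.sample(range(n), k))
    return [mask_token if i in masked_positions else w
            for i, w in enumerate(words)]

text = "the movie was surprisingly good and well acted".split()
masked = random_mask(text)
```

At prediction time, the smoothed classifier would run the base model on many independently masked copies of the input and take a majority vote; the certificate then bounds how much that vote can change under a limited number of word perturbations.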

Authors (6)
  1. Jiehang Zeng (5 papers)
  2. Xiaoqing Zheng (44 papers)
  3. Jianhan Xu (8 papers)
  4. Linyang Li (57 papers)
  5. Liping Yuan (13 papers)
  6. Xuanjing Huang (287 papers)
Citations (59)
