CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation (2010.02338v1)

Published 5 Oct 2020 in cs.CL

Abstract: NLP models are shown to suffer from robustness issues, i.e., a model's prediction can be easily changed under small perturbations to the input. In this work, we present a Controlled Adversarial Text Generation (CAT-Gen) model that, given an input text, generates adversarial texts through controllable attributes that are known to be invariant to task labels. For example, in order to attack a model for sentiment classification over product reviews, we can use the product categories as the controllable attribute which would not change the sentiment of the reviews. Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches. We further use our generated adversarial examples to improve models through adversarial training, and we demonstrate that our generated attacks are more robust against model re-training and different model architectures.
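The core idea in the abstract can be illustrated with a minimal sketch: rewrite an input under different values of a label-invariant attribute (here, product category) and keep rewrites that flip a model's prediction. Everything below is hypothetical for illustration only — the toy keyword classifier, the noun-swap rewriter, and all names are assumptions, not the paper's CAT-Gen model, which uses a learned controllable text generator.

```python
# Hypothetical sketch of attribute-controlled adversarial generation.
# A toy classifier with a spurious category correlation ("blender" learned
# as a negative cue) stands in for a trained sentiment model; the rewriter
# swaps category-specific nouns, a perturbation that should not change the
# true sentiment label.

CATEGORY_NOUNS = {
    "books": "book",
    "electronics": "phone",
    "kitchen": "blender",
}

def toy_sentiment(text: str) -> str:
    """Keyword stand-in for a model; 'blender' is a spurious negative cue."""
    negatives = {"broke", "refund", "terrible", "blender"}
    return "neg" if any(w in text.lower() for w in negatives) else "pos"

def controlled_rewrites(review: str, source_cat: str):
    """Realize other attribute values by swapping the category noun."""
    src = CATEGORY_NOUNS[source_cat]
    for cat, noun in CATEGORY_NOUNS.items():
        if cat != source_cat:
            yield cat, review.replace(src, noun)

def adversarial_candidates(review: str, source_cat: str):
    """Keep rewrites whose predicted label differs from the original's."""
    original = toy_sentiment(review)
    return [(cat, text)
            for cat, text in controlled_rewrites(review, source_cat)
            if toy_sentiment(text) != original]

# The "kitchen" rewrite flips the toy model's prediction even though the
# true sentiment is unchanged -- an adversarial example in this sense.
cands = adversarial_candidates(
    "The book arrived on time and works great", "books")
```

Because the attribute is invariant to the task label, any prediction flip exposes a spurious correlation, and such examples can then be fed back for adversarial training as described in the abstract.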

Authors (8)
  1. Tianlu Wang (33 papers)
  2. Xuezhi Wang (64 papers)
  3. Yao Qin (41 papers)
  4. Ben Packer (11 papers)
  5. Kang Li (207 papers)
  6. Jilin Chen (32 papers)
  7. Alex Beutel (52 papers)
  8. Ed Chi (24 papers)
Citations (78)
