
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization (2212.09254v1)

Published 19 Dec 2022 in cs.CL

Abstract: Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in NLP. However, in contrast to the computer vision domain, where first-order projected gradient descent (PGD) is the benchmark approach for generating adversarial examples for robustness evaluation, NLP lacks a principled first-order gradient-based robustness evaluation framework. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under an LLM. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TextGrad, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework: we develop an effective convex relaxation method to co-optimize the continuously relaxed site-selection and perturbation variables, and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TextGrad can be baked into adversarial training to further improve the robustness of NLP models. Extensive experiments demonstrate the effectiveness of TextGrad not only in attack generation for robustness evaluation but also in adversarial defense.
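
To make the abstract's optimization idea concrete, the sketch below illustrates PGD-style first-order updates on continuously relaxed site-selection and token-substitution variables, followed by a discrete rounding step. It is a minimal illustration under assumed names (a toy linear classifier, random candidate substitutes, sigmoid/softmax relaxations, top-k rounding), it omits the fluency/perplexity constraint, and it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of gradient-driven optimization over
# relaxed site-selection and substitution variables, then rounding to a
# discrete textual perturbation. All names (toy_model, embed, candidate sets,
# step size, budget) are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

vocab_size, embed_dim, seq_len, num_cands = 100, 16, 8, 5
embed = torch.nn.Embedding(vocab_size, embed_dim)   # stand-in embedding table
toy_model = torch.nn.Linear(embed_dim, 2)            # stand-in victim classifier
for p in list(embed.parameters()) + list(toy_model.parameters()):
    p.requires_grad_(False)                           # victim model is frozen

input_ids = torch.randint(vocab_size, (seq_len,))            # original token ids
cand_ids = torch.randint(vocab_size, (seq_len, num_cands))   # candidate substitutes per site
label = torch.tensor(0)

# Relaxed attack variables: z[i] ~ "perturb site i?", u[i] ~ distribution over candidates.
z_logits = torch.zeros(seq_len, requires_grad=True)
u_logits = torch.zeros(seq_len, num_cands, requires_grad=True)

budget, steps, lr = 2, 50, 0.5
for _ in range(steps):
    z = torch.sigmoid(z_logits)                  # relaxed site selection in [0, 1]
    u = F.softmax(u_logits, dim=-1)              # relaxed substitution distribution
    orig_emb = embed(input_ids)                                  # (seq_len, d)
    sub_emb = (u.unsqueeze(-1) * embed(cand_ids)).sum(1)         # expected substitute embedding
    mixed = (1 - z).unsqueeze(-1) * orig_emb + z.unsqueeze(-1) * sub_emb
    logits = toy_model(mixed.mean(0))             # mean-pool then classify
    loss = -F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))  # descend -CE = ascend CE
    loss.backward()
    with torch.no_grad():                         # first-order (PGD-like) updates
        z_logits -= lr * z_logits.grad
        u_logits -= lr * u_logits.grad
        z_logits.grad.zero_()
        u_logits.grad.zero_()

# Round the relaxed solution to a discrete perturbation: keep the top-`budget`
# sites and, at each, the most probable candidate (a crude stand-in for the
# paper's sampling-based mapping).
with torch.no_grad():
    sites = torch.topk(torch.sigmoid(z_logits), budget).indices
    adv_ids = input_ids.clone()
    adv_ids[sites] = cand_ids[sites, F.softmax(u_logits, dim=-1)[sites].argmax(-1)]
print("perturbed sites:", sites.tolist())
```

Per the abstract, the actual method additionally enforces the fluency (low-perplexity) constraint and maps the continuous variables to discrete perturbations via a sampling procedure, which the deterministic top-k rounding above only approximates.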

Authors (7)
  1. Bairu Hou (14 papers)
  2. Jinghan Jia (30 papers)
  3. Yihua Zhang (36 papers)
  4. Guanhua Zhang (24 papers)
  5. Yang Zhang (1132 papers)
  6. Sijia Liu (204 papers)
  7. Shiyu Chang (120 papers)
Citations (14)
