Adversarial Attack and Defense of Structured Prediction Models (2010.01610v2)

Published 4 Oct 2020 in cs.CL

Abstract: Building effective adversarial attackers and elaborating countermeasures against adversarial attacks in NLP have attracted considerable research in recent years. However, most existing approaches focus on classification problems. In this paper, we investigate attacks and defenses for structured prediction tasks in NLP. Besides the difficulty of perturbing discrete words and the sentence fluency problem faced by attackers in any NLP task, attackers of structured prediction models face a specific challenge: the structured output is sensitive to small perturbations in the input. To address these problems, we propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model with feedback from multiple reference models of the same structured prediction task. Based on the proposed attack, we further reinforce the victim model with adversarial training, making its predictions more robust and accurate. We evaluate the proposed framework on dependency parsing and part-of-speech tagging. Automatic and human evaluations show that the framework succeeds both in attacking state-of-the-art structured prediction models and in improving them through adversarial training.
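The abstract describes the framework only at a high level: a sequence-to-sequence attacker perturbs the input sentence, a reward signal derived from reference models of the same task checks that the perturbation flips the victim's structured output without (apparently) changing the true structure, and the victim is then reinforced by adversarial training. The toy Python below is a minimal sketch of that loop under stated assumptions; the class names, the reward definition, and the stand-in parsers are illustrative only, and the fluency term and the actual learning updates used in the paper are omitted.

```python
import random
from typing import List


class Parser:
    """Toy stand-in for a structured prediction model (e.g. a dependency parser)."""

    def __init__(self, sensitivity: int):
        # Higher (odd) sensitivity makes the toy parser react to word edits.
        self.sensitivity = sensitivity

    def predict(self, sentence: List[str]) -> List[int]:
        # Toy "parse": one head index per token, weakly dependent on the word,
        # so that editing a word can flip part of the predicted structure.
        n = len(sentence)
        return [(i + (len(w) * self.sensitivity) % 2) % n
                for i, w in enumerate(sentence)]


class Seq2SeqAttacker:
    """Toy stand-in for the sequence-to-sequence attacker."""

    def perturb(self, sentence: List[str]) -> List[str]:
        # Toy edit: tweak one token; a real attacker generates a fluent paraphrase.
        out = list(sentence)
        i = random.randrange(len(out))
        out[i] = out[i] + "s"
        return out

    def update(self, sentence: List[str], adv: List[str], reward: float) -> None:
        # Placeholder for the attacker's learning step (e.g. a policy-gradient
        # update on the reward); intentionally a no-op in this sketch.
        pass


def structure_change(pred_a: List[int], pred_b: List[int]) -> float:
    """Fraction of tokens whose predicted head changed between two parses."""
    return sum(a != b for a, b in zip(pred_a, pred_b)) / max(len(pred_a), 1)


def attack_reward(victim: Parser, references: List[Parser],
                  sent: List[str], adv: List[str]) -> float:
    # Reward the attacker when the victim's structure changes while the reference
    # models' structures stay stable (a proxy for "the true structure is unchanged").
    # The paper additionally rewards sentence fluency, omitted here.
    victim_change = structure_change(victim.predict(sent), victim.predict(adv))
    ref_change = sum(structure_change(r.predict(sent), r.predict(adv))
                     for r in references) / len(references)
    return victim_change - ref_change


def attack_and_defend(victim: Parser, attacker: Seq2SeqAttacker,
                      references: List[Parser], corpus: List[List[str]]) -> None:
    for sent in corpus:
        adv = attacker.perturb(sent)
        reward = attack_reward(victim, references, sent, adv)
        attacker.update(sent, adv, reward)  # attack: improve the attacker
        # Defense: retrain the victim on (adv, original annotation); here we only
        # report the adversarial example and how well the attack worked.
        print(f"adv example: {adv}  attack reward: {reward:+.2f}")


if __name__ == "__main__":
    corpus = [["the", "cat", "sat", "on", "the", "mat"],
              ["parsers", "are", "sensitive", "to", "small", "edits"]]
    attack_and_defend(Parser(sensitivity=1), Seq2SeqAttacker(),
                      [Parser(sensitivity=2), Parser(sensitivity=4)], corpus)
```

In the real framework the attacker is a trained sequence-to-sequence model and the victim is retrained on the adversarial sentences paired with the original annotations; the stand-ins above only mimic the flow of rewards and updates between the two phases.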

Authors (4)
  1. Wenjuan Han (36 papers)
  2. Liwen Zhang (34 papers)
  3. Yong Jiang (194 papers)
  4. Kewei Tu (74 papers)
Citations (35)