
TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack (2210.15221v1)

Published 27 Oct 2022 in cs.CL and cs.AI

Abstract: We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models that produces fluent and grammatical adversarial contexts while maintaining the gold answers. Despite phenomenal progress on general adversarial attacks, few works have investigated the vulnerability of, and attacks tailored to, QA models. In this work, we first explore the biases in existing models and discover that they mainly rely on keyword matching between the question and context, and ignore the relevant contextual relations for answer prediction. Based on these two biases, TASA attacks the target model in two ways: (1) lowering the model's confidence in the gold answer with a perturbed answer sentence; (2) misguiding the model towards a wrong answer with a distracting answer sentence. Equipped with designed beam search and filtering methods, TASA generates more effective attacks than existing textual attack methods while sustaining the quality of the contexts, as shown in extensive experiments on five QA datasets and in human evaluations.
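To make the two-pronged idea concrete, here is a minimal, hedged sketch (not the authors' implementation): given candidate adversarial contexts that still contain the gold answer, pick the one that most lowers the victim model's confidence in that answer. It assumes the Hugging Face `transformers` question-answering pipeline; the checkpoint name and the candidate-generation step (where TASA's perturbed and distracting answer sentences, beam search, and filtering would live) are placeholders.

```python
# Illustrative sketch of confidence-lowering context selection,
# NOT the TASA implementation. Assumes `transformers` is installed;
# the model checkpoint is an arbitrary extractive QA example.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def gold_confidence(question: str, context: str, gold_answer: str) -> float:
    """Model's score if its prediction still matches the gold answer,
    else 0.0 (the prediction has already been flipped)."""
    pred = qa(question=question, context=context)
    return pred["score"] if gold_answer.lower() in pred["answer"].lower() else 0.0

def greedy_attack(question: str, context: str, gold_answer: str,
                  candidates: list[str]) -> str:
    """Choose the candidate context that most reduces confidence in the
    gold answer while keeping the gold answer present in the text
    (a greedy stand-in for TASA's beam search over twin answer sentences)."""
    best_ctx = context
    best_score = gold_confidence(question, context, gold_answer)
    for cand in candidates:
        if gold_answer not in cand:      # keep the gold answer recoverable
            continue
        score = gold_confidence(question, cand, gold_answer)
        if score < best_score:
            best_ctx, best_score = cand, score
    return best_ctx
```

In the paper's full method, the candidate set is built from a perturbed answer sentence (to lower confidence in the gold answer) plus an inserted distracting answer sentence (to misguide the model), with beam search and filtering enforcing fluency and grammaticality; the sketch above only shows the selection criterion.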

Authors (7)
  1. Yu Cao (129 papers)
  2. Dianqi Li (18 papers)
  3. Meng Fang (100 papers)
  4. Tianyi Zhou (172 papers)
  5. Jun Gao (267 papers)
  6. Yibing Zhan (73 papers)
  7. Dacheng Tao (829 papers)
Citations (12)
