
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks (2206.14729v1)

Published 29 Jun 2022 in cs.CL, cs.AI, and cs.HC

Abstract: Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team "longhorns" on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first, with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments, as well as our official submission.
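To make the reported metric concrete, the sketch below illustrates one way to compute a "model error rate" on human-written adversarial examples for an extractive QA model. The model checkpoint (`deepset/roberta-base-squad2`), the example data, and the exact-match criterion are illustrative assumptions, not the paper's or the workshop's official evaluation setup.

```python
# Minimal sketch: estimate how often an extractive QA model is "fooled"
# (model error rate) on adversarial question/context pairs.
# NOTE: model name, data, and exact-match scoring are illustrative assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Each adversarial example pairs a passage and a tricky question with the gold answer span.
adversarial_examples = [
    {
        "context": "The committee met on Tuesday, a day after the report was filed on Monday.",
        "question": "On which day was the report filed?",
        "answer": "Monday",
    },
    # ... more human-authored adversarial examples ...
]

errors = 0
for ex in adversarial_examples:
    pred = qa(question=ex["question"], context=ex["context"])["answer"]
    # Count an error whenever the predicted span does not match the gold answer.
    if pred.strip().lower() != ex["answer"].strip().lower():
        errors += 1

print(f"Model error rate: {errors / len(adversarial_examples):.0%}")
```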

Authors (12)
  1. Venelin Kovatchev (12 papers)
  2. Trina Chatterjee (2 papers)
  3. Venkata S Govindarajan (6 papers)
  4. Jifan Chen (12 papers)
  5. Eunsol Choi (76 papers)
  6. Gabriella Chronis (2 papers)
  7. Anubrata Das (12 papers)
  8. Katrin Erk (23 papers)
  9. Matthew Lease (57 papers)
  10. Junyi Jessy Li (79 papers)
  11. Yating Wu (9 papers)
  12. Kyle Mahowald (40 papers)
Citations (10)
