
Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions (2201.00768v1)

Published 3 Jan 2022 in cs.CL, cs.AI, cs.CR, cs.HC, and cs.LG

Abstract: Recent NLP techniques have accomplished high performance on benchmark datasets, primarily due to the significant improvement in the performance of deep learning. The advances in the research community have led to great enhancements in state-of-the-art production systems for NLP tasks, such as virtual assistants, speech recognition, and sentiment analysis. However, such NLP systems still often fail when tested with adversarial attacks. This lack of robustness exposes troubling gaps in current models' language understanding capabilities, creating problems when NLP systems are deployed in real life. In this paper, we present a structured overview of NLP robustness research by summarizing the literature in a systematic way across various dimensions. We then take a deep dive into the various dimensions of robustness, across techniques, metrics, embeddings, and benchmarks. Finally, we argue that robustness should be multi-dimensional, provide insights into current research, and identify gaps in the literature, suggesting directions worth pursuing to address them.
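The adversarial failures the abstract refers to can be as simple as small character-level perturbations that leave text readable to humans but shift a model's predictions. A minimal illustrative sketch (not a method from this paper; the function name and parameters are hypothetical) of such a perturbation:

```python
import random

def char_swap_attack(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Toy character-level adversarial perturbation: swap two adjacent
    interior characters in randomly chosen words. Edits like this often
    remain human-readable yet can degrade an NLP model's accuracy."""
    rng = random.Random(seed)
    perturbed = []
    for word in text.split():
        if len(word) > 3 and rng.random() < rate:
            # Pick an interior position and swap it with its neighbor,
            # leaving the first and last characters untouched.
            i = rng.randrange(1, len(word) - 2)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        perturbed.append(word)
    return " ".join(perturbed)

print(char_swap_attack("the movie was absolutely wonderful"))
```

Robustness benchmarks of the kind surveyed in the paper measure how much a model's performance drops under perturbations like this one.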

Authors (4)
  1. Marwan Omar (13 papers)
  2. Soohyeon Choi (6 papers)
  3. DaeHun Nyang (30 papers)
  4. David Mohaisen (43 papers)
Citations (51)