Adv-OLM: Generating Textual Adversaries via OLM (2101.08523v1)
Published 21 Jan 2021 in cs.CL, cs.AI, and cs.LG
Abstract: Deep learning models are susceptible to adversarial examples: imperceptible perturbations of the original input that cause these models to fail. Analyzing such attacks on state-of-the-art transformer models in NLP can help improve the robustness of these models against adversarial inputs. In this paper, we present Adv-OLM, a black-box attack method that adapts the idea of Occlusion and Language Models (OLM) to current state-of-the-art attack methods. OLM is used to rank the words of a sentence, which are then substituted using word replacement strategies. We experimentally show that our approach outperforms other attack methods on several text classification tasks.
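For intuition, below is a minimal sketch of the attack loop the abstract describes: words are ranked by an occlusion-and-language-model style importance score, then greedily substituted until the classifier's label flips. This is not the authors' implementation; the importance score is a simplification of the full OLM relevance method, and all callables (`classify`, `sample_fillers`, `get_synonyms`) are hypothetical placeholders the reader would supply.

```python
from typing import Callable, List, Sequence


def olm_word_importance(
    words: Sequence[str],
    classify: Callable[[str], Sequence[float]],       # hypothetical: sentence -> class probabilities
    sample_fillers: Callable[[Sequence[str], int], Sequence[str]],  # hypothetical: LM samples for position i
    target: int,
) -> List[float]:
    """Score each word by the average drop in the target-class probability
    when that word is occluded and re-filled with language-model samples
    (a simplified stand-in for the OLM relevance score)."""
    base = classify(" ".join(words))[target]
    scores: List[float] = []
    for i in range(len(words)):
        fillers = sample_fillers(words, i)
        if not fillers:
            scores.append(0.0)
            continue
        drop = 0.0
        for filler in fillers:
            perturbed = list(words)
            perturbed[i] = filler
            drop += base - classify(" ".join(perturbed))[target]
        scores.append(drop / len(fillers))
    return scores


def adv_olm_attack(
    sentence: str,
    classify: Callable[[str], Sequence[float]],
    sample_fillers: Callable[[Sequence[str], int], Sequence[str]],
    get_synonyms: Callable[[str], Sequence[str]],      # hypothetical: word replacement strategy
) -> str:
    """Greedy black-box attack: rank words with the OLM-style score, then
    substitute the highest-ranked words until the predicted label flips."""
    words = sentence.split()
    probs = classify(sentence)
    orig_label = max(range(len(probs)), key=probs.__getitem__)

    scores = olm_word_importance(words, classify, sample_fillers, orig_label)
    # Attack the most important words first.
    for i in sorted(range(len(words)), key=lambda j: scores[j], reverse=True):
        best_word = words[i]
        best_prob = classify(" ".join(words))[orig_label]
        for cand in get_synonyms(words[i]):
            trial = words[:i] + [cand] + words[i + 1:]
            p = classify(" ".join(trial))[orig_label]
            if p < best_prob:                          # keep the most damaging substitute
                best_word, best_prob = cand, p
        words[i] = best_word
        adv = " ".join(words)
        new_probs = classify(adv)
        if max(range(len(new_probs)), key=new_probs.__getitem__) != orig_label:
            return adv                                 # label flipped: adversarial example found
    return " ".join(words)                             # attack did not flip the label
```

Because only `classify` outputs are queried, the sketch respects the black-box setting; the LM-based fillers are what distinguish OLM-style ranking from plain deletion-based occlusion.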
- Vijit Malik
- Ashwani Bhat
- Ashutosh Modi