Read Over the Lines: Attacking LLMs and Toxicity Detection Systems with ASCII Art to Mask Profanity (2409.18708v4)
Published 27 Sep 2024 in cs.CL, cs.AI, and cs.CR
Abstract: We introduce a novel family of adversarial attacks that exploit the inability of LLMs to interpret ASCII art. To evaluate these attacks, we propose the ToxASCII benchmark and develop two custom ASCII art fonts: one leveraging special tokens and another using text-filled letter shapes. Our attacks achieve a perfect 1.0 Attack Success Rate across ten models, including OpenAI's o1-preview and LLaMA 3.1. Warning: this paper contains examples of toxic language used for research purposes.
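To illustrate the general idea (not the paper's two custom fonts, which are not reproduced here), the sketch below uses the stock pyfiglet library to render a word as multi-line ASCII art and embed it in a prompt, so the word never appears as a plain token sequence. The prompt template and helper names are hypothetical.

```python
# Minimal sketch of an ASCII-art masking attack, assuming a stock FIGlet
# font via pyfiglet (pip install pyfiglet). The paper's custom fonts
# (special-token and text-filled letters) are not replicated here.
import pyfiglet

def mask_word(word: str, font: str = "standard") -> str:
    """Render `word` as multi-line ASCII art using a stock FIGlet font."""
    return pyfiglet.figlet_format(word, font=font)

def build_prompt(masked: str) -> str:
    # Hypothetical prompt template; the paper's actual prompts may differ.
    return (
        "The ASCII art below spells a single word. "
        "Use that word naturally in a sentence:\n\n" + masked
    )

if __name__ == "__main__":
    art = mask_word("example")  # substitute the word to be masked
    print(build_prompt(art))
```

Because the masked word is spread across many lines of punctuation-like characters, a toxicity filter operating on the raw token stream sees no profane subword sequence, which is the gap these attacks exploit.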