Visual Attack and Defense on Text (2008.10356v1)
Published 7 Aug 2020 in cs.CV and cs.AI
Abstract: Modifying the characters of a piece of text into visually similar ones often appears in spam and other settings in order to fool inspection systems, which we regard as a kind of adversarial attack on neural models. We propose a way of generating such visual text attacks and show that the attacked text remains readable by humans but greatly misleads a neural classifier. We apply a vision-based model and adversarial training to defend against the attack without losing the ability to understand normal text. Our results also show that visual attacks are extremely sophisticated and diverse; more work needs to be done to address them.
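The kind of attack the abstract describes can be illustrated with a minimal homoglyph-substitution sketch. This is a hypothetical example, not the paper's actual generation method: the character map and the `visual_attack` function below are assumptions for illustration.

```python
import random

# Hypothetical map from ASCII letters to visually similar Unicode glyphs
# (here, Cyrillic look-alikes); the paper's character set may differ.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "c": "\u0441",  # Cyrillic small es
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
}

def visual_attack(text: str, rate: float = 1.0) -> str:
    """Replace each mapped character with a look-alike glyph
    with probability `rate`, leaving other characters unchanged."""
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and random.random() < rate:
            out.append(HOMOGLYPHS[ch])
        else:
            out.append(ch)
    return "".join(out)

attacked = visual_attack("free money offer")
print(attacked)
print(attacked == "free money offer")  # False: byte-level content differs
```

To a human reader the attacked string looks nearly identical to the original, but a text classifier that operates on character or subword IDs sees entirely different tokens, which is why the abstract turns to a vision-based model for defense.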
- Shengjun Liu (6 papers)
- Ningkang Jiang (1 paper)
- Yuanbin Wu (47 papers)