Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples (2209.02128v1)
Abstract: Recent advances in the development of large language models (LLMs) have resulted in public access to state-of-the-art pre-trained language models (PLMs), including Generative Pre-trained Transformer 3 (GPT-3) and Bidirectional Encoder Representations from Transformers (BERT). However, evaluations of PLMs in practice have shown their susceptibility to adversarial attacks during the training and fine-tuning stages of development. Such attacks can result in erroneous outputs, model-generated hate speech, and the exposure of users' sensitive information. While existing research has focused on adversarial attacks during either the training or the fine-tuning of PLMs, there is a deficit of information on attacks made between these two development phases. In this work, we highlight a major security vulnerability in the public release of GPT-3 and further investigate this vulnerability in other state-of-the-art PLMs. We restrict our work to pre-trained models that have not undergone fine-tuning. Further, we underscore token distance-minimized perturbations as an effective adversarial approach, bypassing both supervised and unsupervised quality measures. Following this approach, we observe a significant decrease in text classification quality when evaluating for semantic similarity.
- Hezekiah J. Branch
- Jonathan Rodriguez Cefalu
- Jeremy McHugh
- Leyla Hujer
- Aditya Bahl
- Daniel del Castillo Iglesias
- Ron Heichman
- Ramesh Darwishi
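
To make the abstract's notion of a token distance-minimized perturbation concrete, the sketch below illustrates the general idea: an adversarial input that stays within a tiny edit distance of the original (here via Unicode homoglyph substitution) while changing the character sequence the model tokenizes. This is an illustrative assumption-laden example, not the authors' handcrafted attack code; the `HOMOGLYPHS` map, `perturb`, and `edit_distance` helpers are hypothetical names chosen for the sketch.

```python
# Minimal sketch of a distance-minimized perturbation (illustrative only).
# A few Latin characters are swapped for visually similar Cyrillic codepoints,
# keeping the edit distance small while altering the tokenized input.

# Hypothetical homoglyph map: Latin letters -> visually similar Cyrillic letters.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}


def perturb(text: str, budget: int = 2) -> str:
    """Replace at most `budget` characters with homoglyphs, left to right."""
    out, used = [], 0
    for ch in text:
        if used < budget and ch in HOMOGLYPHS:
            out.append(HOMOGLYPHS[ch])
            used += 1
        else:
            out.append(ch)
    return "".join(out)


def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance, to confirm the perturbation is minimal."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]


if __name__ == "__main__":
    original = "Translate the following sentence to French."
    adversarial = perturb(original)
    print(adversarial)                           # renders nearly identically on screen
    print(edit_distance(original, adversarial))  # small distance, e.g. 2
    print(original == adversarial)               # False: the model sees different tokens
```

In the paper's setting, perturbations of this flavor are reported to slip past both supervised and unsupervised quality measures; evaluating the downstream outputs for semantic similarity is then what reveals the degradation in text classification quality.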