Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Backdoor Attacks for In-Context Learning with Language Models (2307.14692v1)

Published 27 Jul 2023 in cs.CR

Abstract: Because state-of-the-art LLMs are expensive to train, most practitioners must make use of one of the few publicly available LLMs or LLM APIs. This consolidation of trust increases the potency of backdoor attacks, where an adversary tampers with a machine learning model in order to make it perform some malicious behavior on inputs that contain a predefined backdoor trigger. We show that the in-context learning ability of LLMs significantly complicates the question of developing backdoor attacks, as a successful backdoor must work against various prompting strategies and should not affect the model's general purpose capabilities. We design a new attack for eliciting targeted misclassification when LLMs are prompted to perform a particular target task and demonstrate the feasibility of this attack by backdooring multiple LLMs ranging in size from 1.3 billion to 6 billion parameters. Finally we study defenses to mitigate the potential harms of our attack: for example, while in the white-box setting we show that fine-tuning models for as few as 500 steps suffices to remove the backdoor behavior, in the black-box setting we are unable to develop a successful defense that relies on prompt engineering alone.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (4)
  1. Nikhil Kandpal (12 papers)
  2. Matthew Jagielski (51 papers)
  3. Florian Tramèr (87 papers)
  4. Nicholas Carlini (101 papers)
Citations (66)