Jatmo: Prompt Injection Defense by Task-Specific Finetuning (2312.17673v2)

Published 29 Dec 2023 in cs.CR, cs.AI, and cs.CL

Abstract: LLMs are attracting significant research attention due to their instruction-following abilities, allowing users and developers to leverage LLMs for a variety of tasks. However, LLMs are vulnerable to prompt-injection attacks: a class of attacks that hijack the model's instruction-following abilities, changing responses to prompts to undesired, possibly malicious ones. In this work, we introduce Jatmo, a method for generating task-specific models resilient to prompt-injection attacks. Jatmo leverages the fact that LLMs can only follow instructions once they have undergone instruction tuning. It harnesses a teacher instruction-tuned model to generate a task-specific dataset, which is then used to fine-tune a base model (i.e., a non-instruction-tuned model). Jatmo only needs a task prompt and a dataset of inputs for the task: it uses the teacher model to generate outputs. For situations with no pre-existing datasets, Jatmo can use a single example, or in some cases none at all, to produce a fully synthetic dataset. Our experiments on seven tasks show that Jatmo models provide similar quality of outputs on their specific task as standard LLMs, while being resilient to prompt injections. The best attacks succeeded in less than 0.5% of cases against our models, versus 87% success rate against GPT-3.5-Turbo. We release Jatmo at https://github.com/wagner-group/prompt-injection-defense.

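To make the pipeline in the abstract concrete, below is a minimal sketch of the Jatmo-style data-generation step: a teacher instruction-tuned model labels raw task inputs, and the resulting input/output pairs (with the instruction deliberately omitted) are written to a JSONL file for fine-tuning a base, non-instruction-tuned model. This is not the released implementation; the task prompt, input list, client setup, teacher model name, and output file are illustrative assumptions.

```python
# Sketch of the Jatmo-style dataset-generation step (illustrative, not the authors' code).
# Assumptions: an OpenAI-compatible client, a user-supplied task prompt and input list,
# and a downstream fine-tuning pipeline that consumes prompt/completion JSONL records.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task_prompt = "Summarize the following news article in one paragraph."  # example task
inputs = ["<article text 1>", "<article text 2>"]  # pre-existing (or synthetic) task inputs

records = []
for text in inputs:
    # The teacher (instruction-tuned) model produces the reference output for each input.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{task_prompt}\n\n{text}"}],
    )
    output = response.choices[0].message.content
    # The fine-tuning pair omits the instruction: the base model learns to map raw task
    # inputs to outputs, so injected instructions in the input carry no special status.
    records.append({"prompt": text, "completion": output})

with open("jatmo_task_dataset.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

Fine-tuning a base model on this dataset yields a task-specific model that, per the paper's experiments, matches standard LLM output quality on its task while reducing the best prompt-injection attack success rate to under 0.5%.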
Authors (8)
  1. Julien Piet (8 papers)
  2. Maha Alrashed (2 papers)
  3. Chawin Sitawarin (26 papers)
  4. Sizhe Chen (23 papers)
  5. Zeming Wei (24 papers)
  6. Elizabeth Sun (1 paper)
  7. Basel Alomair (14 papers)
  8. David Wagner (67 papers)
Citations (32)