
Phantom: General Trigger Attacks on Retrieval Augmented Language Generation (2405.20485v2)

Published 30 May 2024 in cs.CR, cs.CL, and cs.LG

Abstract: Retrieval Augmented Generation (RAG) expands the capabilities of modern LLMs by anchoring, adapting, and personalizing their responses to the most relevant knowledge sources. It is particularly useful in chatbot applications, allowing developers to customize LLM output without expensive retraining. Despite their significant utility in various applications, RAG systems present new security risks. In this work, we propose new attack vectors that allow an adversary to inject a single malicious document into a RAG system's knowledge base and mount a backdoor poisoning attack. We design Phantom, a general two-stage optimization framework against RAG systems that crafts a malicious poisoned document leading to an integrity violation in the model's output. First, the document is constructed to be retrieved only when a specific trigger sequence of tokens appears in the victim's queries. Second, the document is further optimized with crafted adversarial text that induces various adversarial objectives on the LLM output, including refusal to answer, reputation damage, privacy violations, and harmful behaviors. We demonstrate our attacks on multiple LLM architectures, including Gemma, Vicuna, and Llama, and show that they transfer to GPT-3.5 Turbo and GPT-4. Finally, we successfully conducted a Phantom attack on NVIDIA's black-box production RAG system, "Chat with RTX".

Authors (8)
  1. Harsh Chaudhari (13 papers)
  2. Giorgio Severi (11 papers)
  3. John Abascal (4 papers)
  4. Matthew Jagielski (51 papers)
  5. Christopher A. Choquette-Choo (49 papers)
  6. Milad Nasr (48 papers)
  7. Cristina Nita-Rotaru (29 papers)
  8. Alina Oprea (56 papers)
Citations (13)