TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models (2405.13401v4)

Published 22 May 2024 in cs.CR and cs.CL

Abstract: LLMs have raised concerns about potential security threats despite their strong performance in NLP. Backdoor attacks have been shown to harm LLMs at every stage of the pipeline, but their cost and robustness have been criticized: attacking LLMs directly is inherently risky under security review and prohibitively expensive, and the continuous iteration of LLMs degrades the robustness of implanted backdoors. In this paper, we propose TrojanRAG, which employs a joint backdoor attack on Retrieval-Augmented Generation, thereby manipulating LLMs in universal attack scenarios. Specifically, the adversary constructs elaborate target contexts and trigger sets. Multiple pairs of backdoor shortcuts are orthogonally optimized by contrastive learning, constraining the triggering conditions to a parameter subspace to improve matching. To improve the recall of the RAG retriever on the target contexts, we introduce a knowledge graph to construct structured data, achieving hard matching at a fine-grained level. Moreover, we normalize the backdoor scenarios in LLMs to analyze the real harm caused by backdoors from both the attacker's and the user's perspectives, and further verify whether retrieved context is a favorable tool for jailbreaking models. Extensive experimental results on truthfulness, language understanding, and harmfulness show that TrojanRAG poses versatile threats while maintaining retrieval capability on normal queries.
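The contrastive optimization the abstract describes can be illustrated with an InfoNCE-style objective: a triggered query is pulled toward the adversary's target context (positive) and pushed away from benign contexts (negatives), so the poisoned retriever ranks the malicious passage first only when the trigger is present. The following is a minimal NumPy sketch of that loss on toy embeddings; the function name, embeddings, and temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce_loss(query_emb, pos_emb, neg_embs, temperature=0.05):
    """InfoNCE-style contrastive loss on a single (query, positive, negatives)
    tuple. A low loss means the query embedding is much closer to the
    positive (here: the attacker's target context) than to the negatives."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos_score = np.exp(cos(query_emb, pos_emb) / temperature)
    neg_score = sum(np.exp(cos(query_emb, n) / temperature) for n in neg_embs)
    return float(-np.log(pos_score / (pos_score + neg_score)))

# Toy 2-D embeddings (hypothetical): after poisoning, the triggered query
# aligns with the target context, while a clean query does not.
target_ctx     = np.array([1.0, 0.0])   # attacker-crafted passage
benign_ctx     = np.array([0.0, 1.0])   # normal corpus passage
triggered_query = np.array([0.9, 0.1])  # query containing the trigger
clean_query     = np.array([0.1, 0.9])  # ordinary query

loss_triggered = info_nce_loss(triggered_query, target_ctx, [benign_ctx])
loss_clean     = info_nce_loss(clean_query, target_ctx, [benign_ctx])
# Training drives loss_triggered down, so only triggered queries
# recall the target context; clean retrieval stays intact.
```

Minimizing this loss over many trigger/context pairs is what builds the "backdoor shortcuts" in the retriever's parameter space, while clean queries, which have low similarity to the target contexts, are left effectively untouched.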

Authors (8)
  1. Pengzhou Cheng (17 papers)
  2. Yidong Ding (2 papers)
  3. Tianjie Ju (16 papers)
  4. Zongru Wu (13 papers)
  5. Wei Du (124 papers)
  6. Ping Yi (11 papers)
  7. Zhuosheng Zhang (125 papers)
  8. Gongshen Liu (37 papers)
Citations (15)