Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
38 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

System Prompt Poisoning: Persistent Attacks on Large Language Models Beyond User Injection (2505.06493v1)

Published 10 May 2025 in cs.CR and cs.AI

Abstract: LLMs have gained widespread adoption across diverse applications due to their impressive generative capabilities. Their plug-and-play nature enables both developers and end users to interact with these models through simple prompts. However, as LLMs become more integrated into various systems in diverse domains, concerns around their security are growing. Existing studies mainly focus on threats arising from user prompts (e.g. prompt injection attack) and model output (e.g. model inversion attack), while the security of system prompts remains largely overlooked. This work bridges the critical gap. We introduce system prompt poisoning, a new attack vector against LLMs that, unlike traditional user prompt injection, poisons system prompts hence persistently impacts all subsequent user interactions and model responses. We systematically investigate four practical attack strategies in various poisoning scenarios. Through demonstration on both generative and reasoning LLMs, we show that system prompt poisoning is highly feasible without requiring jailbreak techniques, and effective across a wide range of tasks, including those in mathematics, coding, logical reasoning, and natural language processing. Importantly, our findings reveal that the attack remains effective even when user prompts employ advanced prompting techniques like chain-of-thought (CoT). We also show that such techniques, including CoT and retrieval-augmentation-generation (RAG), which are proven to be effective for improving LLM performance in a wide range of tasks, are significantly weakened in their effectiveness by system prompt poisoning.

System Prompt Poisoning: Persistent Attacks on LLMs

The proliferation of LLMs necessitates a comprehensive understanding of their security vulnerabilities. The paper "System Prompt Poisoning: Persistent Attacks on LLMs Beyond User Injection" explores a largely overlooked aspect of LLM security, offering a novel perspective on the longstanding challenge of ensuring the integrity of these advanced AI systems.

Introduction to System Prompt Poisoning

This research introduces "system prompt poisoning," an attack vector aimed at compromising LLMs by manipulating system prompts rather than user prompts. Unlike traditional user prompt injection or model inversion attacks, where the focus is either on perturbing user inputs or extracting sensitive model information, system prompt poisoning targets the foundational instructions that underpin the model’s operational paradigm. By modifying the system prompts, attackers can achieve persistent effects that influence subsequent user interactions, thereby posing a substantial threat to the security and reliability of LLMs.

The authors systematically investigated four distinct attack strategies under diverse scenarios, demonstrating the feasibility of system prompt poisoning across both generative and reasoning models. The effectiveness of these strategies holds regardless of user prompt sophistication, including scenarios utilizing advanced techniques such as chain-of-thought (CoT) and retrieval-augmentation-generation (RAG). The findings suggest that even highly touted methods for enhancing LLM performance can be significantly undermined by system prompt poisoning.

Numerical Results and Claims

The experiments conducted reveal that system prompt poisoning is highly effective and feasible, without necessitating jailbreak techniques. For example, experiments showed that poisoned prompts drastically diminished the accuracy of LLM tasks across various domains such as mathematics, coding, and natural language processing. This evidences significant vulnerabilities, particularly when prompt augmentation strategies fail to counteract the poisoning effects.

Implications and Speculation

Several implications unfold from this paper. Practically, it urges a reconsideration of LLM deployment strategies, emphasizing the importance of securing system prompts against potential manipulations. Theoretically, it opens new avenues for research into understanding how foundational prompt instructions can be made robust against adversarial exploitation. It also challenges the community to develop countermeasures that can effectively safeguard system integrity.

Looking ahead, future developments in AI applications should incorporate security principles that encompass both user and system prompts, aiming to achieve a comprehensive security framework. Additionally, the research could pave the way for integrating real-time monitoring tools capable of detecting anomalies in system prompt interactions.

Conclusion

This paper’s exploration of system prompt poisoning underscores the necessity of ongoing vigilance and adaptation in the field of LLM security. It extends beyond the conventional domain of user prompt injections, shedding light on deeper systemic vulnerabilities within LLM frameworks. Researchers and developers in the AI domain must prioritize the robustness of system prompts to ensure the sustained efficacy and safety of LLM technologies. As we continue to integrate AI into critical applications, understanding and mitigating these novel attack vectors remains paramount for advancing secure AI systems.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (2)
  1. Jiawei Guo (16 papers)
  2. Haipeng Cai (20 papers)
Youtube Logo Streamline Icon: https://streamlinehq.com