Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
108 tokens/sec
GPT-4o
67 tokens/sec
Gemini 2.5 Pro Pro
54 tokens/sec
o3 Pro
13 tokens/sec
GPT-4.1 Pro
49 tokens/sec
DeepSeek R1 via Azure Pro
24 tokens/sec
2000 character limit reached

Prompt Injection Attacks

Updated 23 June 2025

Prompt injection attacks are a class of adversarial techniques that target LLM-integrated applications by manipulating input sequences such that the LLM disregards the application's intended instructions and instead executes commands or tasks desired by the attacker. These attacks exploit the LLM's strong instruction-following capability and inability to distinguish between legitimate system/user instructions and instructions inadvertently or maliciously introduced through untrusted data inputs. Prompt injection attacks present a widespread and evolving risk to LLM deployments, impacting a broad range of tasks from text generation to tool invocation and decision support.

1. Formalization and Taxonomy

Prompt injection attacks are rigorously defined by a formal framework that delineates legitimate (target) and injected (attacker) tasks. Let sts^t denote the target instruction prompt, xtx^t the legitimate data prompt, and (se,xe)(s^e, x^e) the attacker-injected instruction and data. A prompt injection attack manipulates xtx^t so that, when concatenated with sts^t, the LLM completes the injected task (se,xe)(s^e, x^e) instead of the intended one. The compromised data prompt can be represented as:

x~=A(xt,se,xe)\tilde{x} = \mathcal{A}(x^t, s^e, x^e)

with the LLM queried as stx~s^t \oplus \tilde{x}, where \oplus denotes concatenation (Liu et al., 2023 ).

Attack categorization is generally structured along two axes:

  1. Direct Prompt Injection – where the adversary inputs malicious instructions directly, e.g., role-play scenarios, developer modes, explicit ignore directives, or adversarial suffixes.
  2. Indirect Prompt Injection – where malicious instructions are injected into data later consumed by the LLM (e.g., emails, web content, training data poisoning) (Rossi et al., 31 Jan 2024 ).

A further division separates attacks according to mechanism: naive concatenation, escape character insertion, context ignoring ("Ignore previous instructions"), fake completion (fabricated system output), adversarially optimized triggers, and multi-stage attacks (Liu et al., 2023 , Pasquini et al., 6 Mar 2024 ).

2. Mechanisms and Practical Exploitation

Prompt injection exploits the compositional nature of prompt processing in LLMs. Inputs from application instructions and user data (or external sources) are sequenced as a flat string. The LLM, trained to follow instructions wherever they appear, cannot reliably separate privileged prompts from untrusted data, enabling the attacker to override or append new instructions. Attack efficacy is heightened by the LLM's recency and context-weighting behaviors—later/injected instructions are often prioritized.

Attack schemes documented include:

  • Naive Attack: x~=xtsexe\tilde{x} = x^t \oplus s^e \oplus x^e
  • Escape Characters: Injecting tokens (e.g. "\n") effect contextual switch
  • Context Ignoring: Inserting ignore directives before attack code
  • Fake Completion: Mimicking prompt boundaries or injecting fabricated responses
  • Combined Attack: Sequential combination of above for robust, generalized attack without requiring target task knowledge
  • Neural Exec/Optimized Attacks: Attack triggers generated by discrete optimization rather than manual design, yielding universal and diverse forms that resist pattern matching and filtering (Pasquini et al., 6 Mar 2024 )

Indirect attacks leverage vulnerabilities in software components that ingest external content—e.g., an LLM-powered email assistant extracting a hidden "forward all mail" instruction from an email (Rossi et al., 31 Jan 2024 ).

3. Systematic Evaluation and Benchmarks

Comprehensive benchmarks are pivotal for reliable evaluation. The Open-Prompt-Injection benchmark (Liu et al., 2023 ) standardizes assessment by including:

  • Multiple LLMs (e.g., PaLM 2, GPT-4, Llama 2, etc.)
  • Broad NLP task coverage (classification, NLI, grammar correction, summarization)
  • Distinct attack types and defense configurations
  • Quantitative metrics: Performance under No Attack (PNA), Attack Success Score (ASS), Matching Rate (MR)

Experimental findings highlight:

  • Combined attacks consistently achieve high attack success (ASS/MR ≥ 0.9) on all major models and tasks.
  • Larger, more capable LLMs exhibit higher vulnerability due to stronger instruction adherence.
  • Effectiveness is maintained regardless of context length or in-context learning ("shot" count).
  • The need for adaptive, optimization-based attacks and benchmark-driven evaluation frameworks for future research (Liu et al., 2023 , Liu et al., 15 Apr 2025 ).

4. Defense Strategies: Effectiveness and Limitations

Defense strategies are classified into prevention-based, detection-based, and hybrid categories:

Prevention-Based Defenses

  • Prompt Engineering: Use of delimiters, prompt isolation, hierarchy, sandwiching.
  • Instructional Prompting: Explicit instructions to ignore subsequent inputs.
  • Encoding-based Isolation: Encoding external data (e.g., Base64) to prevent instruction ambiguity.

These methods often provide limited efficacy against sophisticated attacks and can degrade LLM utility or be circumvented by determined adversaries (e.g., through adversarially optimized triggers or template mimicry). Paraphrasing ("utility hurting") is the only prevention defense with non-trivial but costly effectiveness (Liu et al., 2023 , Jia et al., 23 May 2025 , Zhang et al., 10 Apr 2025 ).

Detection-Based Defenses

  • Proactive Detection: Embedding known instructions or secrets and evaluating if the model outputs match expected targets. This approach achieves near-perfect attack suppression with minimal utility loss (Liu et al., 2023 ).
  • Embedding-based Classifiers: Machine learning classifiers (e.g., Random Forests, XGBoost) trained over prompt embeddings, outperforming neural encoders on prompt injection detection (Ayub et al., 29 Oct 2024 ).
  • Attention Pattern Analysis: Techniques (e.g., Attention Tracker) identify prompt injections via shifts in attention from instructions to injected content. These methods are effective under typical attacks but degrade rapidly with adversarially-adapted triggers (Hung et al., 1 Nov 2024 , Jia et al., 23 May 2025 ).
  • Game-Theoretic Defenses: Minimax-trained detectors (e.g., DataSentinel) that are adversarially optimized to withstand adaptive attacks, demonstrating lower false positive/negative rates than standard classifiers and achieving near-zero errors in robust benchmarks (Liu et al., 15 Apr 2025 ).

Cryptographic and Role-Based Defenses

  • Signed-Prompt: Trusted instructions are cryptographically signed or encoded with unique tokens; LLMs are engineered or fine-tuned to act only on authorized, signed instructions, blocking unsigned ones (Suo, 15 Jan 2024 ).
  • Structured Queries: Separating instructions and data through reserved token demarcation and model fine-tuning, ensuring data cannot mimic prompts and removing the ambiguity exploited by injection (Chen et al., 9 Feb 2024 ).

5. Defense Evaluation and Research Gaps

A critical evaluation reveals that early and some recent studies may overstate defense effectiveness by relying principally on narrow attacks, singular tasks, or only relative utility metrics. Principled assessment across both effectiveness (against existing and adaptive attacks) and absolute, general-purpose utility is necessary (Jia et al., 23 May 2025 ).

Key empirical results include:

  • Many defenses show good win rates but poor absolute utility—true task performance can degrade by 0.1–0.17 under defense even when relative rates appear favorable.
  • Optimization-based and adaptive attacks bypass most defenses entirely (ASV >0.7–1.0).
  • Detection-based approaches with high AUC may be impractical in deployment due to excessive false positives or failure under adaptive attacks.
  • Only proactive or game-theoretic detectors demonstrate low FNR/FPR in challenging, adversarial settings.
  • Effective defenses can reduce attack success to zero in narrow test settings, but no method is universally robust across all scenarios and task/attack diversity.

6. Benchmarking Infrastructure and Community Resources

To advance systematic paper, public benchmark platforms provide standardized datasets, attack/defense implementations, and computation scripts (e.g., Open-Prompt-Injection GitHub), enabling reproducible experiments and cross-institutional comparison (Liu et al., 2023 ). Such infrastructure underpins the creation of new evaluation metrics, including:

  • Attack Success Score (ASS)
  • Matching Rate (MR)
  • Attack Success Value (ASV)
  • False Positive/Negative Rates (FPR/FNR)
  • Absolute Utility Loss

Researchers are encouraged to employ these open resources, design adaptive/rich attack scenarios, and report both relative and absolute impact.

7. Future Directions

Identified gaps and challenges point toward several open research areas:

  • Optimization-Based and Adaptive Attack/Defense Co-evolution: Iterative ’red-team/blue-team’ methodologies are essential as attackers rapidly adapt.
  • Multi-modal and Agentic Scenarios: Evaluation and defense design must extend to vision-LLMs and tool-calling agents, where indirect, visual, or context-carrying prompt injections are highly effective despite current mitigations (Clusmann et al., 23 Jul 2024 , Alizadeh et al., 1 Jun 2025 ).
  • Instruction Reference and Output Filtering: Harnessing the LLM’s instruction-following strengths (e.g., by having the model self-identify which instruction it executed in responses) enables effective, low-utility-cost defense via output filtering (Chen et al., 29 Apr 2025 ).
  • Defense Evaluation Benchmarks: Comprehensive, diversified, and public benchmarks (e.g., NotInject for over-defense) are necessary for both attack and defense advancement (Li et al., 30 Oct 2024 ).
  • Layered Security and Usability Trade-offs: Multi-tiered defenses (system prompt, rule-based, LLM-based) balance usability and security, with ongoing optimization of false alarm rates and model performance under protection (Khomsky et al., 20 Jun 2024 ).

In summary, prompt injection attacks represent a fundamental and persistent threat to LLM-integrated applications. Their broad spectrum, adaptability, and the challenge of distinguishing data from instruction demand continued innovation in both formal benchmarking and robust, multi-faceted defenses. Consistent evaluation with respect to both resistance and utility is essential for the reliable deployment of LLM technologies in security-critical domains.