Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Prompt Injection attack against LLM-integrated Applications (2306.05499v2)

Published 8 Jun 2023 in cs.CR, cs.AI, cs.CL, and cs.SE
Prompt Injection attack against LLM-integrated Applications

Abstract: LLMs, renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation.

Overview of Prompt Injection in LLM-integrated Applications

The paper "Prompt Injection Attack Against LLM-integrated Applications" addresses a significant security concern stemming from the integration of LLMs like GPT-4, LLaMA, and PaLM2 into a myriad of applications. These integrations, while advantageous in augmenting the capabilities of digital assistants and other AI-driven services, also open new vectors for security threats—specifically, prompt injection attacks. This paper systematically explores this vulnerability and introduces HouYi, a novel, adaptable method for executing black-box prompt injection attacks.

Prompt injection attacks exploit the manner in which LLMs interpret prompts, allowing malicious actors to override preset instructions and manipulate application outcomes. The researchers conduct an exploratory analysis of 36 real-world LLM-integrated applications, uncovering a substantial susceptibility to such attacks. Remarkably, 31 applications prove vulnerable to HouYi, a method inspired by traditional web injection strategies, marking it as a critical tool in evaluating the resilience of current prompt-based AI systems.

Key Contributions

  1. Comprehensive Investigation of Real-world Vulnerabilities: The paper shines a light on the potential risks of integrating LLMs into applications by evaluating 36 commercial services. Their findings are striking—over 86% of these applications can be compromised using prompt injection, underscoring an urgent need for enhanced security measures.
  2. Development of HouYi: The researchers introduce HouYi, a novel, iterative, black-box prompt injection technique that draws parallels to SQL injection and Cross-site Scripting (XSS) attacks. HouYi breaks down into a pre-constructed prompt, a separator for context partition, and a malicious payload, ensuring enhanced effectiveness compared to previous heuristic methods. The approach's reliance on an LLM for context inference and payload generation represents a significant advance in automating and optimizing the attack process.
  3. Illustrative Case Studies and Numerical Analysis: The paper validates its findings through detailed case studies, demonstrating the real-world applicability of HouYi, with vendors like Notion corroborating the vulnerabilities identified. Importantly, the researchers provide quantitative assessments indicating the financial and operational impacts of such attacks, such as the potential exploitation of resources leading to millions in losses.

Implications and Future Directions

The implications of this paper for AI and cybersecurity are profound. By clearly delineating the vulnerabilities present in most LLM-integrated applications, the authors highlight the critical need for robust defenses. Current mitigation strategies are insufficient against advanced techniques like HouYi, suggesting that developers and researchers must innovate beyond traditional input sanitization and format enforcement strategies.

For future work, exploring more sophisticated detection and prevention methodologies, such as dynamic context evaluation and real-time behavioral analysis, could be pivotal. Furthermore, advancing secure prompt engineering and developing frameworks for continuous monitoring and adaptation in AI systems could mitigate such threats.

In conclusion, this research provides a pivotal step towards understanding and combatting prompt injection attacks in LLM-integrated applications. The introduction of HouYi exposes the wide-reaching consequences of these vulnerabilities, setting the stage for further exploration and development of robust security frameworks. As AI continues to permeate everyday applications, ensuring the integrity and security of these systems is imperative for maintaining user trust and safeguarding information.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (11)
  1. Yi Liu (543 papers)
  2. Gelei Deng (35 papers)
  3. Yuekang Li (34 papers)
  4. Kailong Wang (41 papers)
  5. Tianwei Zhang (199 papers)
  6. Yepang Liu (35 papers)
  7. Haoyu Wang (309 papers)
  8. Yan Zheng (102 papers)
  9. Yang Liu (2253 papers)
  10. Zihao Wang (216 papers)
  11. Xiaofeng Wang (310 papers)
Citations (240)
Youtube Logo Streamline Icon: https://streamlinehq.com
Reddit Logo Streamline Icon: https://streamlinehq.com