Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? (2308.01990v3)

Published 3 Aug 2023 in cs.CR

Abstract: LLMs have found widespread applications in various domains, including web applications, where they facilitate human interaction via chatbots with natural language interfaces. Internally, aided by an LLM-integration middleware such as Langchain, user prompts are translated into SQL queries used by the LLM to provide meaningful responses to users. However, unsanitized user prompts can lead to SQL injection attacks, potentially compromising the security of the database. Despite the growing interest in prompt injection vulnerabilities targeting LLMs, the specific risks of generating SQL injection attacks through prompt injections have not been extensively studied. In this paper, we present a comprehensive examination of prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the Langchain framework. Using Langchain as our case study, we characterize P$_2$SQL injections, exploring their variants and impact on application security through multiple concrete examples. Furthermore, we evaluate 7 state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks across LLMs. Our findings indicate that LLM-integrated applications based on Langchain are highly susceptible to P$_2$SQL injection attacks, warranting the adoption of robust defenses. To counter these attacks, we propose four effective defense techniques that can be integrated as extensions to the Langchain framework. We validate the defenses through an experimental evaluation with a real-world use case application.

Exploring and Mitigating Prompt-to-SQL Injection Vulnerabilities in LLM-Integrated Web Applications

Introduction

LLMs have surged in adoption for various web applications, notably enhancing the capabilities of chatbots and virtual assistants with natural language interfaces. This paper undertakes a thorough examination of the potential security breaches introduced by incorporating LLMs into web applications, specifically focusing on the vulnerabilities related to prompt-to-SQL (P2_2SQL) injections within the context of the Langchain middleware. The research characterizes the nature and implications of such attacks, evaluates the susceptibility across different LLM technologies, and proposes a suite of defenses tailored to mitigate these risks.

P2_2SQL Injection Attack Variants (RQ1)

The paper identified and detailed four main classes of P2_2SQL injection attacks, differentiated by their methods and objectives:

  • Unrestricted prompting attacks directly manipulate the chatbot into executing malicious SQL queries by crafting the user's input.
  • Direct attacks on restricted prompting demonstrated that even when prompted instructions include explicit restrictions against certain SQL operations, there exist crafted inputs capable of bypassing these safeguards.
  • Indirect attacks showed that malicious prompt fragments could be inserted into the database by an attacker, subsequently altering the chatbot's behavior when interacting with other users.
  • Injected multi-step query attacks notably highlighted the incremental danger when assistants utilize multiple SQL queries to address a single question, enabling complex attack strategies like account hijacking.

P2_2SQL Injections across Models (RQ2)

The research extended to evaluate the pervasiveness of P2_2SQL vulnerabilities across seven LLMs, including both proprietary models like GPT-4 and open-access models such as Llama 2. It was discovered that, except for a few models exhibiting inconsistent behavior (e.g., Tulu and Guanaco), all tested LLMs remained susceptible to various degrees of P2_2SQL injection attacks, including bypassing restrictions on SQL operations and accessing unauthorized data.

Mitigating P2_2SQL Injections (RQ3)

To counter P2_2SQL attacks, the paper proposed and evaluated four distinct defense mechanisms:

  • Database permission hardening leveraged role-based access controls at the database level to effectively limit the capability of the chatbot to perform only read operations, directly mitigating writes violations.
  • SQL query rewriting programmatically altered generated SQL queries to ensure compliance with access restrictions, showing particular effectiveness against confidentiality breaches.
  • Preloading data into the LLM prompt served as a preventive measure by including all necessary user data in the LLM prompt, thereby obviating the requirement for additional database queries susceptible to attack.
  • Auxiliary LLM Guard involved employing a secondary LLM instance tasked with inspecting SQL query results for potential injection attacks, albeit with acknowledged limitations in detection accuracy and potential for circumvention.

Conclusion

The research unequivocally demonstrates that LLM-integrated applications, while enhancing usability and functionality through natural language processing capabilities, introduce significant security vulnerabilities manifested in the form of P2_2SQL injection attacks. Through comprehensive analysis, the paper not only sheds light on these vulnerabilities but also contributes practical defenses to ameliorate the risks they present. Nonetheless, the evolving nature of LLMs and their integration patterns necessitates ongoing vigilance and further research to identify emerging vulnerabilities and refine mitigation strategies.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (4)
  1. Rodrigo Pedro (1 paper)
  2. Daniel Castro (48 papers)
  3. Paulo Carreira (1 paper)
  4. Nuno Santos (26 papers)
Citations (46)
X Twitter Logo Streamline Icon: https://streamlinehq.com