Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks Against LLM-Integrated Applications (2401.07612v1)

Published 15 Jan 2024 in cs.CR and cs.AI

Abstract: The critical challenge of prompt injection attacks in LLMs integrated applications, a growing concern in the AI field. Such attacks, which manipulate LLMs through natural language inputs, pose a significant threat to the security of these applications. Traditional defense strategies, including output and input filtering, as well as delimiter use, have proven inadequate. This paper introduces the 'Signed-Prompt' method as a novel solution. The study involves signing sensitive instructions within command segments by authorized users, enabling the LLM to discern trusted instruction sources. The paper presents a comprehensive analysis of prompt injection attack patterns, followed by a detailed explanation of the Signed-Prompt concept, including its basic architecture and implementation through both prompt engineering and fine-tuning of LLMs. Experiments demonstrate the effectiveness of the Signed-Prompt method, showing substantial resistance to various types of prompt injection attacks, thus validating its potential as a robust defense strategy in AI security.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (1)
  1. Xuchen Suo (1 paper)
Citations (18)
X Twitter Logo Streamline Icon: https://streamlinehq.com