
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs) (2407.14937v1)

Published 20 Jul 2024 in cs.CL and cs.CR

Abstract: Creating secure and resilient applications with large language models (LLMs) requires anticipating, adjusting to, and countering unforeseen threats. Red-teaming has emerged as a critical technique for identifying vulnerabilities in real-world LLM implementations. This paper presents a detailed threat model and provides a systematization of knowledge (SoK) of red-teaming attacks on LLMs. We develop a taxonomy of attacks based on the stages of the LLM development and deployment process and extract various insights from previous research. In addition, we compile methods for defense and practical red-teaming strategies for practitioners. By delineating prominent attack motifs and shedding light on various entry points, this paper provides a framework for improving the security and robustness of LLM-based systems.
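
The abstract's central artifact is a taxonomy of attacks organized by lifecycle stage. As a rough illustration of how such a stage-indexed catalog might be encoded, the Python sketch below defines hypothetical stage and attack types; the stage names, attack names, entry points, and defenses are placeholder assumptions for illustration, not the paper's actual taxonomy labels.

```python
# Illustrative sketch only: stage and attack names below are hypothetical
# placeholders, not the taxonomy defined in the paper.
from dataclasses import dataclass, field
from enum import Enum, auto


class LifecycleStage(Enum):
    """Hypothetical stages of the LLM development/deployment pipeline."""
    PRETRAINING = auto()
    FINE_TUNING = auto()
    RETRIEVAL_AUGMENTATION = auto()
    INFERENCE = auto()


@dataclass
class Attack:
    name: str
    stage: LifecycleStage
    entry_point: str  # e.g. training corpus, system prompt, user input
    defenses: list[str] = field(default_factory=list)


# Example: indexing red-team findings by the stage at which they apply.
catalog = [
    Attack("data poisoning", LifecycleStage.PRETRAINING, "training corpus"),
    Attack("prompt injection", LifecycleStage.INFERENCE, "user input",
           defenses=["input filtering", "instruction hierarchy"]),
]

by_stage: dict[LifecycleStage, list[str]] = {}
for attack in catalog:
    by_stage.setdefault(attack.stage, []).append(attack.name)

print(by_stage)
```

Keying attacks to lifecycle stages this way mirrors the paper's framing: each entry point (training data, prompts, deployed endpoints) maps to a stage, so defenses can be compiled per stage rather than per individual exploit.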

Authors (10)
  1. Apurv Verma (9 papers)
  2. Satyapriya Krishna (27 papers)
  3. Sebastian Gehrmann (48 papers)
  4. Madhavan Seshadri (4 papers)
  5. Anu Pradhan (2 papers)
  6. Tom Ault (1 paper)
  7. Leslie Barrett (3 papers)
  8. David Rabinowitz (23 papers)
  9. John Doucette (2 papers)
  10. NhatHai Phan (26 papers)
Citations (3)