Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
119 tokens/sec
GPT-4o
56 tokens/sec
Gemini 2.5 Pro Pro
43 tokens/sec
o3 Pro
6 tokens/sec
GPT-4.1 Pro
47 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming (2311.06237v3)

Published 10 Nov 2023 in cs.CL, cs.CR, and cs.HC

Abstract: Engaging in the deliberate generation of abnormal outputs from LLMs by attacking them is a novel human activity. This paper presents a thorough exposition of how and why people perform such attacks, defining LLM red-teaming based on extensive and diverse evidence. Using a formal qualitative methodology, we interviewed dozens of practitioners from a broad range of backgrounds, all contributors to this novel work of attempting to cause LLMs to fail. We focused on the research questions of defining LLM red teaming, uncovering the motivations and goals for performing the activity, and characterizing the strategies people use when attacking LLMs. Based on the data, LLM red teaming is defined as a limit-seeking, non-malicious, manual activity, which depends highly on a team-effort and an alchemist mindset. It is highly intrinsically motivated by curiosity, fun, and to some degrees by concerns for various harms of deploying LLMs. We identify a taxonomy of 12 strategies and 35 different techniques of attacking LLMs. These findings are presented as a comprehensive grounded theory of how and why people attack LLMs: LLM red teaming.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (3)
  1. Nanna Inie (9 papers)
  2. Jonathan Stray (9 papers)
  3. Leon Derczynski (48 papers)
Citations (10)

Summary

A Grounded Theory of LLM Red Teaming in the Wild

The paper "Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming in the Wild" presents an empirical exploration of the burgeoning practice of adversarial testing of LLMs. By employing a rigorous qualitative methodology, the authors illuminate the motivations, strategies, and communal dynamics underpinning this emergent practice of 'red teaming.' This investigation is captured through an analysis based on interviews with practitioners actively engaging in designing inputs that provoke LLMs into producing unexpected outputs.

Core Phenomenon: Red Teaming LLMs

Red teaming captures the essence of probing LLMs with adversarial prompts, seeking to induce output misalignments. This practice, originally a security term implying team-based efforts to explore vulnerabilities, has been adapted to fit the sphere of LLMs, exhibiting five specific characteristics. It is limit-seeking, involves vanilla (non-malicious) attacks, relies on manual processes, benefits from communal effort, and requires a mindset reminiscent of alchemical practices. By manually crafting inputs to undermine the integrity of LLM outputs, red teamers operate with an exploratory zeal akin to laboratory experimenters testing the bounds of syntactic and semantic defenses.

Motivations and Context

The practitioners of red teaming are driven by a blend of intrinsic motivations—including curiosity, the pursuit of fun, and societal concerns regarding AI ethics—and extrinsic motivations associated with professional interests and social capital. These motivations spur a wide array of goals, ranging from achieving specific adversarial outputs to contributing knowledge to the community and honing professional skill sets. This motivation fosters a vibrant community, notable for its openness and collaboration, as red teamers frequently share their findings and techniques with each other via online platforms and social media.

Strategies and Techniques

The taxonomy of strategies elucidated in this paper distinguishes between language manipulation, rhetorical tactics, creation of hypothetical scenarios (possible worlds), fictionalizing narratives, and employing stratagems. Language strategies involve syntactic and semantic manipulations—such as using formalized code or obscure token sequences—to bypass filters. Rhetoric-based strategies parallel human persuasion, employing methods such as incremental escalation or reverse psychology. In crafting possible worlds, the red teamers establish contexts that suspend conventional rules, allowing the model to generate unconventional results safely. Such strategies often involve role-playing, where the model assumes a character free of typical constraints. Lastly, stratagems focus on non-traditional methods like repeating commands to force variable responses.

Implications and Future Directions

The paper importantly highlights that while threats posed by LLMs—like bias or data leakage—can be unearthed through red teaming, such activities are often temporal. The constant evolution of LLMs necessitates a dynamic approach, where strategies and techniques used today may not hold in the near future. Although some risk that successful jailbreaking might mask the need for public vigilance against potential AI harms exists, the paper argues that awareness must remain acute as LLM systems evolve.

Furthermore, this research lays a foundation for refining adversarial methodologies into more structured forms, potentially integrating aspects of these exploratory practices into formal safety rigor within AI model development processes. Future research might foster deep dives into the sociotechnical interplay between AI models and society, with an eye towards synthesizing adversarial findings into actionable insights for developing robust, ethically aligned AI systems.

Overall, this paper provides a structured overview of LLM red teaming as it currently stands, encapsulating both the vibrant dynamics of the practice and its potential implications for the development and deployment of secure AI technologies. As LLMs continue to proliferate, understanding and mitigating their vulnerabilities through such adversarial engagement remains indispensable.