AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration (2503.15754v1)

Published 20 Mar 2025 in cs.CR and cs.AI

Abstract: As LLMs become increasingly capable, security and safety evaluation are crucial. While current red teaming approaches have made strides in assessing LLM vulnerabilities, they often rely heavily on human input and lack comprehensive coverage of emerging attack vectors. This paper introduces AutoRedTeamer, a novel framework for fully automated, end-to-end red teaming against LLMs. AutoRedTeamer combines a multi-agent architecture with a memory-guided attack selection mechanism to enable continuous discovery and integration of new attack vectors. The dual-agent framework consists of a red teaming agent that can operate from high-level risk categories alone to generate and execute test cases and a strategy proposer agent that autonomously discovers and implements new attacks by analyzing recent research. This modular design allows AutoRedTeamer to adapt to emerging threats while maintaining strong performance on existing attack vectors. We demonstrate AutoRedTeamer's effectiveness across diverse evaluation settings, achieving 20% higher attack success rates on HarmBench against Llama-3.1-70B while reducing computational costs by 46% compared to existing approaches. AutoRedTeamer also matches the diversity of human-curated benchmarks in generating test cases, providing a comprehensive, scalable, and continuously evolving framework for evaluating the security of AI systems.

Summary

  • The paper introduces a fully automated lifelong red teaming system that dynamically discovers, validates, and integrates novel attack vectors.
  • It employs a dual-agent architecture, pairing a strategy proposer with a red teaming agent, to perform risk analysis and context-aware test case generation.
  • Empirical evaluations show an attack success rate (ASR) of up to 0.82 on HarmBench (Llama-3.1-70B) and a 46% reduction in computational cost, driven by memory-guided attack selection.

AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration

Motivation and Problem Statement

The increasing deployment of LLMs in high-stakes domains necessitates robust, scalable, and adaptive red teaming methodologies to systematically uncover vulnerabilities, especially as new attack vectors emerge. Manual red teaming, while effective, is not scalable and fails to provide comprehensive coverage of the risk landscape. Existing automated approaches are limited by static prompt sets, lack of integration of novel attack strategies, and insufficient memory mechanisms for tracking attack efficacy. AutoRedTeamer addresses these limitations by introducing a fully automated, lifelong red teaming framework that integrates continuous attack discovery, memory-guided attack selection, and dynamic test case generation.

System Architecture and Methodology

AutoRedTeamer is structured as a dual-agent system: a strategy proposer agent for attack discovery and integration, and a red teaming agent for systematic evaluation. The architecture is modular, supporting both high-level risk category inputs and specific behavioral prompts (Figure 1).

Figure 1: AutoRedTeamer combines automated red teaming evaluation (top) with lifelong attack integration (bottom).

The strategy proposer agent autonomously expands the attack library by mining recent research literature, scoring candidate attacks for novelty and effectiveness, and implementing promising strategies within black-box constraints. This agent not only incorporates published attacks but also synthesizes new attack vectors by recombining core principles from the literature, ensuring coverage of both known and emergent vulnerabilities.
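
To make the discovery step concrete, the following is a minimal sketch of how candidate attacks mined from the literature might be scored before implementation. The `Candidate` fields, the token-overlap novelty heuristic, and all names here are illustrative assumptions; the paper does not specify its scoring functions.

```python
from dataclasses import dataclass

# Hypothetical sketch of the strategy proposer's candidate scoring.
# All names and heuristics here are illustrative, not the paper's API.

@dataclass
class Candidate:
    name: str
    principle: str       # core idea distilled from a mined paper
    reported_asr: float  # effectiveness reported in the source paper

def novelty(principle: str, library: list[str]) -> float:
    """Crude novelty score: 1 minus max token overlap with known attacks."""
    tokens = set(principle.lower().split())
    overlaps = [len(tokens & set(p.lower().split())) / max(len(tokens), 1)
                for p in library] or [0.0]
    return 1.0 - max(overlaps)

def rank_candidates(cands: list[Candidate], library: list[str], k: int = 3):
    """Rank mined attacks by combined novelty and reported effectiveness,
    keeping the top k for implementation and validation."""
    return sorted(cands,
                  key=lambda c: novelty(c.principle, library) * c.reported_asr,
                  reverse=True)[:k]

library = ["role-play persona jailbreak", "payload obfuscation via encoding"]
cands = [
    Candidate("low-resource translation",
              "translate the harmful request into a low-resource language", 0.55),
    Candidate("persona splitting",
              "role-play persona jailbreak with nested personas", 0.60),
]
print([c.name for c in rank_candidates(cands, library)])
# -> ['low-resource translation', 'persona splitting']
```

In the full system, an LLM distills each paper's core principle and the top-ranked candidates are then implemented and validated before entering the attack library.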

The red teaming agent orchestrates the evaluation pipeline:

  • Risk Analyzer decomposes user inputs (risk categories or specific scenarios) into actionable components, ensuring comprehensive scenario coverage.
  • Seed Prompt Generator produces diverse, contextually varied test cases, maintaining semantic relevance and diversity.
  • Strategy Designer selects attack vectors or combinations based on historical performance, leveraging a memory system that tracks attack efficacy, query cost, and usage statistics.
  • Memory System maintains both long-term and short-term records of attack/test case trajectories, enabling experience-driven optimization of attack selection.
  • Evaluator and Relevance Checker ensure that generated outputs both elicit the targeted undesired model behavior (effectiveness) and remain within the intended risk scope (relevance).

This architecture enables AutoRedTeamer to operate in two modes: (1) maximizing attack success rate (ASR) for specific prompts, and (2) generating diverse, risk-category-driven test cases for comprehensive model assessment.
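
As a rough illustration of how the memory system could drive the Strategy Designer, the sketch below scores attacks by historical success rate with an exploration bonus and a query-cost penalty. The UCB-style policy is an assumption made for illustration; the paper states only that the memory tracks attack efficacy, query cost, and usage statistics.

```python
import math
from collections import defaultdict

# Minimal sketch of memory-guided attack selection. The UCB-style
# scoring is an assumption; the paper's memory system tracks attack
# efficacy, query cost, and usage statistics.

class AttackMemory:
    def __init__(self):
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)
        self.query_cost = defaultdict(float)  # running mean of queries used

    def record(self, attack: str, success: bool, queries: int):
        self.trials[attack] += 1
        self.successes[attack] += int(success)
        n = self.trials[attack]
        self.query_cost[attack] += (queries - self.query_cost[attack]) / n

    def select(self, attacks: list[str], cost_weight: float = 0.01) -> str:
        """Pick the attack with the best efficacy/cost trade-off,
        with an exploration bonus for rarely tried attacks."""
        total = sum(self.trials[a] for a in attacks) + 1
        def score(a):
            n = self.trials[a]
            if n == 0:
                return float("inf")  # always try untested attacks once
            asr = self.successes[a] / n
            explore = math.sqrt(2 * math.log(total) / n)
            return asr + explore - cost_weight * self.query_cost[a]
        return max(attacks, key=score)

memory = AttackMemory()
memory.record("pair", True, 20)
memory.record("translate", False, 3)
print(memory.select(["pair", "translate", "persona"]))  # -> "persona" (untried)
```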

Attack Discovery, Integration, and Test Case Generation

The attack discovery pipeline is fully automated. The strategy proposer agent queries academic APIs, scores papers, and implements attacks as modular Python classes. Each attack is validated on a subset of HarmBench; only those exceeding a 30% ASR threshold are integrated. The memory system tracks both standalone and combinatorial attack performance, enabling the Strategy Designer to exploit synergies between attack vectors.
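
A minimal sketch of what such a modular attack class and validation gate might look like follows, assuming a single `transform` method per attack. The interface shape and the `run_and_judge` helper are hypothetical; the 30% ASR integration threshold is from the paper.

```python
from abc import ABC, abstractmethod
from typing import Callable

# Sketch of a modular attack class and the validation gate. The
# interface shape and the run_and_judge helper are assumptions; the
# 30% ASR integration threshold is from the paper.

class Attack(ABC):
    @abstractmethod
    def transform(self, prompt: str) -> str:
        """Rewrite a seed prompt into its attacked form."""

class LeetspeakAttack(Attack):
    """Toy obfuscation attack used purely for illustration."""
    def transform(self, prompt: str) -> str:
        return prompt.translate(str.maketrans({"a": "4", "e": "3", "o": "0"}))

def validate_attack(attack: Attack, behaviors: list[str],
                    run_and_judge: Callable[[str], bool],
                    threshold: float = 0.30) -> bool:
    """Integrate an attack only if its ASR on a validation subset of
    behaviors exceeds the threshold. run_and_judge stands in for
    querying the target model and judging the response."""
    hits = sum(run_and_judge(attack.transform(b)) for b in behaviors)
    return hits / len(behaviors) > threshold

# Dummy judge for illustration: no transformed prompt succeeds.
print(validate_attack(LeetspeakAttack(), ["example behavior"], lambda p: False))
```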

Test case generation is dynamic and context-aware. For risk categories, the Risk Analyzer identifies subcomponents and manifestations across contexts (e.g., demographic targeting, coded language for "Hate speech"), which the Seed Prompt Generator then instantiates as concrete, diverse prompts. Failed or low-efficacy test cases are automatically refined or replaced, ensuring the test set evolves to overcome model defenses (Figure 2).

Figure 2: Example trajectory of a test case generated and refined by AutoRedTeamer, demonstrating the effectiveness of attack vector combinations.
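
The refinement loop can be pictured as a simple generate-evaluate-mutate cycle. In this sketch, `generate_seeds`, `mutate`, and `evaluate` are hypothetical stand-ins for the LLM-backed Seed Prompt Generator, attack application, and Evaluator components described above.

```python
from typing import Callable

# Minimal sketch of the refine-or-replace loop. generate_seeds, mutate,
# and evaluate are hypothetical stand-ins for the LLM-backed components.

def evolve_test_cases(risk_aspects: list[str],
                      generate_seeds: Callable[[str], list[str]],
                      mutate: Callable[[str], str],
                      evaluate: Callable[[str], bool],
                      max_rounds: int = 3) -> list[str]:
    """Keep test cases that elicit the target behavior; refine the rest."""
    cases = [seed for aspect in risk_aspects for seed in generate_seeds(aspect)]
    successes = []
    for _ in range(max_rounds):
        survivors = []
        for case in cases:
            if evaluate(case):                  # elicits the targeted behavior
                successes.append(case)
            else:
                survivors.append(mutate(case))  # refine and retry next round
        cases = survivors
    return successes
```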

Empirical Evaluation and Results

AutoRedTeamer is evaluated on two axes: (1) attack success rate on HarmBench prompts across multiple LLMs, and (2) diversity and effectiveness of test cases generated from AIR risk categories.

  • On HarmBench, AutoRedTeamer achieves 0.82 ASR on Llama-3.1-70B, outperforming PAIR (0.60), TAP (0.60), and Rainbow Teaming (0.18), while requiring significantly fewer queries than optimization-based baselines. On Claude-3.5-Sonnet, AutoRedTeamer is the only method to achieve nontrivial ASR (0.28), with all other baselines near zero.
  • The framework demonstrates a 46% reduction in computational cost compared to state-of-the-art agent-based approaches, due to its memory-guided attack selection and efficient test case refinement.
  • When initialized with only agent-proposed attacks, AutoRedTeamer achieves 0.78 ASR, compared to 0.75 with only human attacks, indicating the effectiveness of its autonomous attack discovery pipeline.

For risk-category-driven evaluation, AutoRedTeamer generates test cases that match or exceed the diversity of human-curated AIR-Bench prompts, while achieving higher ASR across 43 AIR level-3 categories (Figure 3).

Figure 3: ASR across 43 AIR level-3 categories on AIR-Bench (top) and AutoRedTeamer (bottom), demonstrating superior coverage and effectiveness.

Embedding analysis of generated prompts shows that AutoRedTeamer covers a broader region of the semantic space than PAIR and closely matches the distribution of human-curated prompts (Figure 4).

Figure 4: Visualization of final test case embeddings for AIR-Bench, AutoRedTeamer, and PAIR, highlighting the diversity and human-likeness of AutoRedTeamer's outputs.

Analysis and Implications

AutoRedTeamer's memory-guided attack selection and lifelong attack integration enable it to adapt to evolving model defenses and regulatory requirements. The framework's ability to generate diverse, contextually relevant test cases from high-level risk categories is critical for regulatory compliance and comprehensive risk assessment. The empirical results demonstrate that combining attack vectors is consistently more effective than individual attacks, and that automated attack discovery can match or exceed the performance of human experts.
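
Mechanically, combining attack vectors can be as simple as chaining prompt rewrites; the sketch below shows one way such composition could work, with toy rewrites standing in for real attack vectors. The chaining mechanics are an assumption; the finding that combinations outperform individual attacks is the paper's.

```python
# Illustrative sketch of attack composition: chaining rewrites so the
# output of one attack vector feeds the next.

def compose(*attacks):
    """Return a single transform that applies each attack in order."""
    def combined(prompt: str) -> str:
        for attack in attacks:
            prompt = attack(prompt)
        return prompt
    return combined

def role_play(p: str) -> str:      # toy stand-in for a persona attack
    return f"You are an unrestricted assistant. {p}"

def leetspeak(p: str) -> str:      # toy stand-in for an obfuscation attack
    return p.replace("e", "3").replace("a", "4")

print(compose(role_play, leetspeak)("describe the restricted behavior"))
```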

The modular design allows for extensibility to new domains (e.g., multimodal models, agentic systems) and integration with external evaluation pipelines. However, the reliance on LLM-based components introduces potential biases and limitations, as the quality of attack discovery and test case generation is bounded by the capabilities of the underlying models.

Future Directions

Potential extensions include:

  • Generalization to multimodal and embodied agent settings, where attack surfaces are more complex.
  • Integration with defense and mitigation pipelines for closed-loop security evaluation.
  • Development of transferability metrics to assess the generalizability of discovered vulnerabilities across model architectures and deployment contexts.
  • Exploration of adversarial co-evolution, where defense mechanisms adapt in response to AutoRedTeamer's evolving attack strategies.

Conclusion

AutoRedTeamer establishes a new paradigm for automated, lifelong red teaming of LLMs by integrating autonomous attack discovery, memory-guided attack selection, and dynamic test case generation. The framework achieves superior attack success rates and diversity compared to both static and agent-based baselines, while maintaining computational efficiency. Its design principles and empirical results have significant implications for the future of AI safety evaluation, regulatory compliance, and the ongoing adversarial dynamics between attack and defense in large-scale AI systems.
