PACEbench: A Framework for Evaluating Practical AI Cyber-Exploitation Capabilities (2510.11688v1)

Published 13 Oct 2025 in cs.CR and cs.AI

Abstract: The increasing autonomy of LLMs necessitates a rigorous evaluation of their potential to aid in cyber offense. Existing benchmarks often lack real-world complexity and are thus unable to accurately assess LLMs' cybersecurity capabilities. To address this gap, we introduce PACEbench, a practical AI cyber-exploitation benchmark built on the principles of realistic vulnerability difficulty, environmental complexity, and cyber defenses. Specifically, PACEbench comprises four scenarios spanning single, blended, chained, and defense vulnerability exploitations. To handle these complex challenges, we propose PACEagent, a novel agent that emulates human penetration testers by supporting multi-phase reconnaissance, analysis, and exploitation. Extensive experiments with seven frontier LLMs demonstrate that current models struggle with complex cyber scenarios, and none can bypass defenses. These findings suggest that current models do not yet pose a generalized cyber offense threat. Nonetheless, our work provides a robust benchmark to guide the trustworthy development of future models.

Summary

  • The paper introduces PACEbench, a benchmark that evaluates AI agents' cyber-exploitation capabilities using real-world scenarios with varying complexity.
  • It details PACEagent, a modular framework integrating an LLM core, tool module, and memory module for phased and strategic penetration testing.
  • Experimental results show that while closed-source LLMs perform better than open-source ones, none can bypass advanced defenses like production-grade WAFs.

PACEbench: A Comprehensive Benchmark for Evaluating Practical AI Cyber-Exploitation Capabilities

Motivation and Benchmark Design Principles

The increasing autonomy and tool-use capabilities of LLMs have raised concerns about their potential for automating sophisticated cyber offense. Existing evaluation frameworks, primarily based on CTF-style challenges, are limited by their artificial simplicity and lack of real-world complexity. PACEbench addresses these deficiencies by introducing a benchmark grounded in three core principles: vulnerability difficulty, environmental complexity, and the presence of cyber defenses. This design enables a more realistic and granular assessment of LLM-driven agents' cyber-exploitation capabilities.

PACEbench comprises four escalating scenarios:

  • A-CVE: Single, real-world CVE exploitation on a single host, with difficulty quantified by human pass rates.
  • B-CVE: Blended environments with both vulnerable and benign hosts, requiring reconnaissance and target discrimination.
  • C-CVE: Chained exploitation across multiple hosts, necessitating lateral movement and privilege escalation.
  • D-CVE: Exploitation in the presence of production-grade WAFs, demanding defense evasion or novel bypass techniques.

Figure 1: An overview of PACEbench. In this benchmark, an agent's score is a function of both task-specific difficulty and the complexity of the scenario, which scales from isolated vulnerabilities to complex environments.
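To make the scenario taxonomy above concrete, here is a minimal, hypothetical Python sketch of how the four scenario types could be encoded as data. The class and field names are illustrative assumptions, not the paper's actual configuration format, and the CVE identifier is a placeholder.

```python
from dataclasses import dataclass, field
from enum import Enum

class ScenarioType(Enum):
    A_CVE = "single"    # one real-world CVE on one host
    B_CVE = "blended"   # vulnerable and benign hosts mixed together
    C_CVE = "chained"   # multi-host attack path requiring lateral movement
    D_CVE = "defended"  # exploitation behind a production-grade WAF

@dataclass
class Host:
    address: str
    cve_ids: list[str] = field(default_factory=list)  # empty list => benign host

@dataclass
class Scenario:
    scenario_type: ScenarioType
    hosts: list[Host]
    human_pass_rate: float      # proxy for vulnerability difficulty
    waf_enabled: bool = False   # True only for D-CVE scenarios

# Example: a blended (B-CVE) environment with one vulnerable and two benign hosts.
blended = Scenario(
    scenario_type=ScenarioType.B_CVE,
    hosts=[
        Host("10.0.0.11", cve_ids=["CVE-2023-XXXXX"]),  # placeholder identifier
        Host("10.0.0.12"),
        Host("10.0.0.13"),
    ],
    human_pass_rate=0.6,
)
```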

Compared to prior benchmarks, PACEbench introduces blended and defended scenarios, simulating the uncertainty and layered defenses of real-world networks.

Figure 2: Comparison of cybersecurity benchmarks. PACEbench (center) incorporates complex elements like a WAF and multiple hosts, offering a more realistic simulation than traditional CTFs (right).

PACEagent: Architecture and Workflow

To effectively tackle the challenges posed by PACEbench, the authors introduce PACEagent, a modular agent framework that emulates the operational workflow of human penetration testers. The architecture consists of:

  • LLM Core: Responsible for high-level reasoning, strategic planning, and phase management (reconnaissance, analysis, exploitation).
  • Tool Module: Orchestrates both local and external cybersecurity tools via a tool router and the Model Context Protocol (MCP).
  • Memory Module: Maintains a summarized history of interactions, enabling long-horizon reasoning and efficient context management.

The agent operates in a loop, iteratively analyzing the environment, planning actions, executing tools, and updating memory until objectives are met or a step limit is reached.

Figure 3: The architecture of the PACEagent framework, highlighting the phase manager, tools router, and memory module as key enhancements for cybersecurity operations.

This structured, multi-phase approach enables more robust exploration and exploitation in complex, multi-stage environments.
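As an illustration of this workflow, the sketch below shows one way such a phased agent loop could be organized. It is a simplified assumption of the design, not PACEagent's actual implementation; the method names (`plan_action`, `run_tool`, `summarize`) are hypothetical.

```python
from enum import Enum

class Phase(Enum):
    RECON = "reconnaissance"
    ANALYSIS = "analysis"
    EXPLOIT = "exploitation"

def run_agent(llm, tool_router, objective, max_steps=50):
    """Hypothetical PACEagent-style loop: reason, act, summarize, repeat."""
    phase = Phase.RECON
    memory = []  # summarized interaction history rather than raw transcripts

    for _ in range(max_steps):
        # 1. LLM core: reason over the objective, current phase, and memory.
        decision = llm.plan_action(objective=objective, phase=phase, memory=memory)

        if decision.done:
            return decision.result

        # 2. Tool module: route the chosen action to a local or MCP-exposed tool.
        observation = tool_router.run_tool(decision.tool, decision.arguments)

        # 3. Memory module: store a compact summary to keep the context short.
        memory.append(llm.summarize(decision, observation))

        # 4. Phase manager: advance phases when the model judges one complete.
        phase = decision.next_phase or phase

    return None  # step limit reached without completing the objective
```

The key design choice mirrored here is separating strategic reasoning (the LLM core) from tool execution and from context management, so that long-horizon, multi-stage tasks do not exhaust the model's context window.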

Experimental Evaluation

Setup

Seven LLMs (four proprietary: Claude-3.7-Sonnet, Gemini-2.5-Flash, GPT-5-mini, o4-mini; three open-source: Deepseek-v3, Deepseek-r1, Qwen3-32B) were evaluated using both PACEagent and the CAI agent framework. The primary metric is the PACEbench score, a weighted sum of success rates across all four scenario types, using a Pass@5 criterion.
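The PACEbench score is described as a weighted sum of per-scenario success rates under a Pass@5 criterion. The snippet below is a hedged sketch of that scoring scheme; the paper's actual weights are not reproduced here, so the weights and success rates shown are placeholders for illustration only.

```python
def pass_at_5(attempt_outcomes: list[bool]) -> bool:
    """Pass@5: a challenge counts as solved if any of up to 5 attempts succeeds."""
    return any(attempt_outcomes[:5])

def pacebench_score(success_rates: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-scenario success rates (A/B/C/D-CVE)."""
    return sum(weights[s] * success_rates[s] for s in success_rates)

# Illustrative usage with made-up success rates and equal placeholder weights.
rates = {"A-CVE": 0.40, "B-CVE": 0.20, "C-CVE": 0.05, "D-CVE": 0.0}
weights = {"A-CVE": 0.25, "B-CVE": 0.25, "C-CVE": 0.25, "D-CVE": 0.25}
print(pacebench_score(rates, weights))  # 0.1625
```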

Results and Analysis

  • Overall Performance: No model achieved a PACEbench score above 0.241 (Claude-3.7-Sonnet). All models failed to bypass any WAF in D-CVE scenarios.
  • Vulnerability Difficulty: Success rates decline as CVE difficulty increases (measured by human pass rate). Notably, some vulnerabilities difficult for humans were solved by agents, likely due to LLMs' ability to rapidly generate and test payloads.

Figure 4: Count of models that successfully exploited each CVE across difficulty levels, as measured by human pass rate.

  • Environmental Complexity: Introduction of benign hosts (B-CVE) and chained attack paths (C-CVE) significantly degraded agent performance. Agents often failed at reconnaissance, lateral movement, or privilege escalation steps.
  • Cyber Defenses: No model succeeded in bypassing any WAF-protected scenario, indicating a current inability to autonomously defeat standard cyber defenses.

Figure 5: In the C-CVE setup, the agent must pivot from the front network to the internal network, simulating realistic lateral movement constraints.

  • Closed vs. Open-Source Models: Closed-source models outperformed open-source counterparts, primarily due to larger context windows and more advanced capabilities. Open-source models were bottlenecked by context length, failing in multi-stage tasks.

Figure 6: Performance of PACEagent across challenges in PACEbench. Green: Pass@5; Orange: partial success; Red: failure.

  • Agent Architecture Comparison: PACEagent outperformed CAI by 65.2% in total PACEbench score, at the cost of 28% higher token usage. The structured, multi-phase workflow and MCP integration were critical for improved performance.

Implications and Future Directions

The results demonstrate that current LLMs do not pose a generalized autonomous cyber offense threat. Even the best-performing models are limited to isolated, simple vulnerabilities and are unable to handle realistic, multi-stage attacks or bypass modern defenses. This provides a clear baseline for tracking future advances and highlights the need for continued monitoring as LLM capabilities evolve.

From a practical perspective, PACEbench offers a robust methodology for pre-deployment risk assessment of LLM-driven agents in cybersecurity contexts. The modular design of PACEagent, particularly its phase management and memory mechanisms, provides a blueprint for developing more capable and auditable autonomous agents.

Theoretically, the findings suggest that scaling LLMs alone is insufficient for mastering complex cyber exploitation; advances in long-horizon planning, tool integration, and context management are required. The dual-use dilemma is underscored: while current models are not yet a major threat, future improvements could rapidly change the risk landscape, necessitating proactive governance and ethical research focus.

Future work should expand PACEbench to include binary exploitation, increase the diversity and scale of vulnerabilities, and further investigate the integration of advanced safety mechanisms in LLMs.

Conclusion

PACEbench establishes a new standard for evaluating the practical cyber-exploitation capabilities of AI agents, grounded in real-world complexity and defense scenarios. The empirical results highlight the current limitations of LLM-driven agents and provide a rigorous framework for tracking progress and ensuring the safe deployment of future AI systems in cybersecurity domains.
