CTF-Dojo: Scalable CTF Training Platform

Updated 28 August 2025
  • CTF-Dojo is a containerized framework that automates the hosting of cybersecurity challenges and benchmarks both machine learning agents and human participants on them.
  • It employs CTF-Forge for rapid Docker-based environment generation, ensuring 98% reproducibility and setting up challenges in approximately 0.5 seconds each.
  • The platform enhances agent training through execution-grounded feedback loops and rejection sampling, achieving state-of-the-art results on CTF benchmarks.

CTF-Dojo is a large-scale, execution-grounded framework for training, evaluating, and benchmarking autonomous agents and human participants on cybersecurity tasks in Capture-The-Flag (CTF) formats. It integrates automated container generation, verified runtime environments, and verifiable feedback loops to support both machine learning agent training and human skill development across a broad spectrum of vulnerabilities and exploitation scenarios. Its design establishes a scalable platform for executable-agent learning, directly addressing limitations in previous agentic training approaches where real-world challenge environments were either inaccessible or difficult to reproduce.

1. Architecture and Execution Environment

CTF-Dojo comprises 658 fully functional CTF-style challenges, each containerized via Docker to ensure reproducibility and environmental fidelity (Zhuo et al., 25 Aug 2025). The challenges are sourced from curated archives (such as pwn.college) and range across classical binary exploitation, cryptography, web, and miscellaneous security domains. The system presents agents and users with live services for interaction—including dynamic binaries, exposed ports, and vulnerability endpoints—rather than static descriptions or synthetic interfaces.

The central orchestration is achieved via CTF-Forge, an automated pipeline that parses public artifacts, generates Dockerfiles, and assembles containerized challenge environments in approximately 0.5 seconds per challenge. This automation eliminates manual expert configuration traditionally required for environment setup, resulting in a 98% reproducibility rate after rigorous multi-run validation (Zhuo et al., 25 Aug 2025).
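The paper summarized here does not reproduce CTF-Forge's implementation, but its Dockerfile-synthesis step can be sketched in outline. The function name, base image, and directory layout below are illustrative assumptions, not details from the source:

```python
from pathlib import Path

def forge_dockerfile(challenge_dir: Path, service_cmd: str, port: int) -> str:
    """Emit a minimal Dockerfile that packages a challenge's public
    artifacts and exposes its vulnerable service on a fixed port."""
    return "\n".join([
        "FROM ubuntu:22.04",
        "WORKDIR /challenge",
        # copy every public artifact (binary, libc, source files, ...)
        *[f"COPY {p.name} /challenge/{p.name}"
          for p in sorted(challenge_dir.iterdir())],
        f"EXPOSE {port}",
        f'CMD ["sh", "-c", "{service_cmd}"]',
    ])
```

In the real pipeline this synthesis is LLM-driven rather than templated, which is what lets it interpret heterogeneous public artifacts without manual expert configuration.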

Key architectural features include:

  • Strict container-based isolation for each challenge instance
  • Automated port mapping and service verification before rollouts
  • Real-time trajectory logging, including all command inputs, outputs, and flag verification calls
  • Local runtime support for standard exploitation tooling (decompilers, debuggers, diagnostic utilities)
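The service-verification step in the list above amounts to a liveness probe: before an agent rollout begins, the platform confirms the challenge service is actually accepting connections. A minimal sketch, with function name and polling parameters as assumptions:

```python
import socket
import time

def wait_for_service(host: str, port: int, timeout: float = 10.0) -> bool:
    """Poll a challenge container's mapped port until the service
    accepts TCP connections, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # service is up; rollout may proceed
        except OSError:
            time.sleep(0.2)  # not yet listening; retry shortly
    return False
```

Gating rollouts on a check like this prevents wasted agent turns against containers whose services failed to start.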

2. Agent Training Methodology

Agents, typically LLMs with support for multi-turn reasoning, operate within the CTF-Dojo runtime and are afforded a task-specific interaction budget (commonly up to 40 turns). The agent receives the challenge description, optional redacted writeups as hints, and a set of runtime tools. Each session is tracked at the granularity of system calls and outputs, producing rich, execution-verified trajectories.
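The turn-budgeted session described above can be sketched as a simple loop; the `agent.act`/`env.step` interface and the `Trajectory` record are generic assumptions, since the source does not specify an API:

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    turns: list = field(default_factory=list)  # (command, output) pairs
    solved: bool = False

def run_episode(agent, env, max_turns: int = 40) -> Trajectory:
    """Drive one agent/challenge session under a fixed turn budget,
    logging every command and its output for later fine-tuning."""
    traj = Trajectory()
    observation = env.reset()  # challenge description plus optional hints
    for _ in range(max_turns):
        command = agent.act(observation)
        observation, flag_captured = env.step(command)
        traj.turns.append((command, observation))
        if flag_captured:
            traj.solved = True
            break
    return traj
```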

Fine-tuning leverages these successful trajectories using a rejection sampling strategy: after each rollout epoch, only the trajectories that successfully captured the flag are retained for further training. Hyperparameters include a fixed batch size and a modest learning rate (e.g., 5e-6), scaled linearly across model sizes (7B, 14B, and 32B parameters). Runtime perturbations (randomized port assignment, file-system changes, and injected noise in service behavior) help prevent agents from overfitting to static or overly deterministic cues.
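The rejection sampling step reduces to a filter over rollouts. The per-challenge deduplication cap in this sketch is an illustrative assumption (a common way to stop one easy task from dominating the fine-tuning mix), not a documented detail:

```python
def build_sft_dataset(trajectories, max_per_challenge: int = 1):
    """Rejection sampling over rollouts: keep only execution-verified
    successes, capping how many trajectories each challenge contributes."""
    kept, seen = [], {}
    for t in trajectories:
        if not t["solved"]:
            continue  # reject: flag was not captured
        c = t["challenge_id"]
        if seen.get(c, 0) < max_per_challenge:
            kept.append(t)
            seen[c] = seen.get(c, 0) + 1
    return kept
```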

This process has established new open-weight state-of-the-art results, such as a 32B agent reaching 31.9% Pass@1 and up to 11.6% absolute gains over strong baselines on established benchmarks (InterCode-CTF, NYU CTF Bench, Cybench) (Zhuo et al., 25 Aug 2025).

3. Automated Environment Generation (CTF-Forge)

CTF-Forge is a critical innovation for CTF-Dojo, enabling rapid and scalable environment creation. It parses challenge descriptions and associated artifacts, generating Docker configuration files, service definitions, and automated flag-checking scripts. CTF-Forge is capable of interpreting and replicating various flag verification logics, such as SHA-256 validation or complex check scripts tied to custom binary interfaces.

The pipeline supports:

| Feature           | Implementation                          | Impact                     |
|-------------------|-----------------------------------------|----------------------------|
| Dockerization     | LLM-driven parsing and config synthesis | ~0.5 s per challenge setup |
| Flag checking     | Automated script generation             | High validation fidelity   |
| Environment tests | Online port/service checks              | 98% reproducibility        |

By abstracting away manual setup, CTF-Forge supports continuous expansion as new public artifacts become available.
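Of the flag-verification logics mentioned above, SHA-256 validation admits a particularly compact sketch. The function name and whitespace handling here are assumptions; the key property is that only the digest, never the plaintext flag, needs to ship inside the container:

```python
import hashlib

def check_flag(submission: str, expected_sha256: str) -> bool:
    """Validate a submitted flag against its stored SHA-256 digest."""
    digest = hashlib.sha256(submission.strip().encode()).hexdigest()
    return digest == expected_sha256
```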

4. Verification and Performance Metrics

Performance evaluation within CTF-Dojo is predicated on verifiable execution results rather than simulated completion. The Pass@1 metric is used to gauge the agent's ability to successfully capture the flag on the first attempt after training. Models are also tested on benchmark suites external to CTF-Dojo, confirming generalization.
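Pass@1 is the k = 1 case of the standard unbiased pass@k estimator from the code-generation literature; CTF-Dojo reports only Pass@1, so the general form below is included for context rather than taken from the source:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n rollouts of which c succeeded,
    solves the task. With n == k == 1 this is the raw success rate."""
    if n - c < k:
        return 1.0  # every size-k draw must include a success
    return 1.0 - comb(n - c, k) / comb(n, k)
```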

Observed scaling behavior is linear with respect to the number of successful trajectories used for fine-tuning, emphasizing the efficiency and effectiveness of execution-grounded training signals (i.e., the impact of running real code and observing actual exploitation results).

Typical computational procedures encountered include modular arithmetic for cryptographic tasks (e.g., m = c^d mod n, where d is the modular inverse of e modulo φ(n)), binary exploitation via input fuzzing, and multi-stage interaction with networked services.
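The RSA-style relation m = c^d mod n can be worked end to end with small toy primes (the values are chosen purely for illustration; real challenges use much larger parameters):

```python
# Toy RSA round trip: d is the modular inverse of e modulo φ(n).
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # φ(n) = 3120
e = 17                     # public exponent, coprime to φ(n)
d = pow(e, -1, phi)        # private exponent: e^-1 mod φ(n) = 2753

m = 42                     # plaintext message
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: m = c^d mod n recovers the plaintext
```

Challenges of this kind typically require the agent to recover d (or a factor of n) from leaked parameters before performing the final decryption.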

5. Implications for Machine Learning and Cybersecurity Research

CTF-Dojo demonstrates that scalable, high-performance agentic training does not require costly proprietary systems, as competitive results can be achieved using hundreds (not thousands) of high-quality, execution-verified trajectories (Zhuo et al., 25 Aug 2025). The approach shows particular strength in learning complex, real-world exploitation techniques and reasoning under environmental variability—capabilities that static imitation learning or synthetic environments fail to produce.

A plausible implication is the feasibility of open-source, agent-driven CTF frameworks that rival proprietary solutions not only in accuracy but also in cost-effectiveness and deployment flexibility. The execution-grounded paradigm underlines the critical relationship between agent feedback and hands-on vulnerability discovery, with potential extensions toward live benchmarking, reinforcement learning for exploration, and continuous integration with emerging CTF challenge sets.

6. Extensions and Future Directions

The authors explicitly call for the development of live CTF benchmarks, enabling ongoing evaluation as new competition data is published. Prospective enhancements include integrating reinforcement learning signals, dynamic environment adaptation, and real-time agent-in-the-loop testing. Further, runtime augmentation (randomization of challenge characteristics) is expected to enhance generalization, permitting agents to avoid overfitting and develop more agile exploitation tactics.

Broader impacts include the potential to embed CTF-Dojo into automated training and grading platforms, supporting evaluation of both human and ML participants across a spectrum of cybersecurity scenarios.


CTF-Dojo, with its containerized challenges, automated environment generation, and execution-grounded feedback, defines a modern paradigm for cybersecurity agent training and evaluation. Its technical design and demonstrated empirical results establish it as a foundational tool for advancing both the research and practice of agentic learning in complex security domains (Zhuo et al., 25 Aug 2025).

References (1)