HoneyGAN Pots: Realistic Honeypot Generation

Updated 15 January 2026
  • HoneyGAN Pots is an end-to-end system that uses generative adversarial networks to autonomously synthesize realistic honeypot configurations for cyber defense.
  • The system employs both unconditional and conditional GAN models to generate plausible OS, port, service, and CPE combinations with high fidelity and diversity.
  • HoneyGAN Pots outperforms traditional manual catalog methods by enabling rapid synthesis, scalability, and effective deception against automated reconnaissance tools.

HoneyGAN Pots is an end-to-end system that leverages Generative Adversarial Networks (GANs) to autonomously synthesize large numbers of realistic honeypot configurations for cyber defense. Targeting the critical operational challenge of deploying diverse, up-to-date decoy systems, HoneyGAN Pots learns the joint structure of real-world device configurations and supports both unconditional and conditional generation, addressing the limitations of conventional catalog- or image-driven approaches (Gabrys et al., 2024).

1. Motivation and Problem Definition

Traditional low-interaction honeypots are instantiated via manual selection and maintenance of catalogs comprising operating system (OS) versions, open port/service sets, and Common Platform Enumeration (CPE) combinations. This methodology poses two persistent challenges: (i) determining which decoy configurations to deploy at any given time, and (ii) generating novel, plausible configurations capable of deceiving sophisticated reconnaissance tools. As attack surfaces evolve, static lists rapidly become obsolete; new services and firmware must be incorporated by hand, fundamentally restricting scalability and adaptability. Existing practices either hard-code templates or store large, rigid VM images, lacking mechanisms for automatic adaptation to a defender’s environment or for generating diverse, realistic decoys at the scales demanded by large networks (Gabrys et al., 2024).

2. System Architecture

HoneyGAN Pots comprises three closely related GAN variants:

  • Unconditional GAN: Learns the empirical joint distribution over (OS, build, port → service/CPE) feature vectors.
  • Conditional O/S-GAN: Allows conditional generation with a specified OS label.
  • Conditional Device-Type (DT) GAN: Enables conditional sampling to generate configurations for specific high-level device roles (e.g., webserver, file-sharing, VPN gateway).

Configurations are encoded as 64×32 binary feature maps. The encoding scheme assigns columns 1–2 to the one-hot coded OS and build (32 rows each), while columns 3–32 represent the 30 most common ports. Within each of these port columns, two bits are set to one: one marking the running service (rows 1–32), the other indicating its CPE (rows 33–64).
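
The layout above can be made concrete with a minimal sketch, assuming zero-based indices and hypothetical vocabulary assignments (the paper's actual index tables are not given in this summary):

```python
import numpy as np

# Sketch of the 64x32 binary encoding: column 0 holds the one-hot OS,
# column 1 the one-hot build, and columns 2-31 the 30 most common ports.
# Each active port column carries two bits: the service (rows 0-31)
# and its CPE (rows 32-63). Index assignments are hypothetical.
def encode_config(os_id, build_id, port_services):
    """port_services maps a port column (0-29) to (service_id, cpe_id),
    each in [0, 31]."""
    m = np.zeros((64, 32), dtype=np.uint8)
    m[os_id, 0] = 1                 # one-hot OS
    m[build_id, 1] = 1              # one-hot build
    for col, (svc, cpe) in port_services.items():
        m[svc, 2 + col] = 1         # service bit (top half)
        m[32 + cpe, 2 + col] = 1    # CPE bit (bottom half)
    return m
```

A configuration with one open port, e.g. `encode_config(3, 7, {0: (5, 9)})`, sets exactly four bits in the map.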

Discriminator (D):

  • Accepts a 64×32 binary map.
  • Comprises one standard convolutional layer (kernel 5×5), followed by four downsampling convolutional layers (stride 2, kernel 5×5, LeakyReLU activations), and a final dense layer outputting a real/fake score.

Generator (G):

  • Accepts a 100-dimensional noise vector z (optionally concatenated with a label for conditional models).
  • Passes through four upsampling-plus-convolution blocks: nearest-neighbor upsampling by 2×, 3×3 convolution, BatchNorm, ReLU.
  • A final 3×3 convolution with sigmoid activation produces the 64×32 map with entries in [0, 1].

Both networks are trained within the Wasserstein GAN with Gradient Penalty (WGAN-GP) paradigm (Gabrys et al., 2024).
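
The two networks can be sketched in PyTorch as follows; channel widths, padding, and the dense-layer placement are assumptions not specified in the summary:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """One same-size 5x5 conv, four stride-2 5x5 convs, then a dense score."""
    def __init__(self, ch=64):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 5, 1, 2), nn.LeakyReLU(0.2)]
        c = ch
        for _ in range(4):                      # 64x32 -> 32x16 -> ... -> 4x2
            layers += [nn.Conv2d(c, c * 2, 5, 2, 2), nn.LeakyReLU(0.2)]
            c *= 2
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(c * 4 * 2, 1)       # real/fake score
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Generator(nn.Module):
    """100-d noise -> four upsample+3x3 conv blocks -> sigmoid 64x32 map."""
    def __init__(self, z_dim=100, ch=512):
        super().__init__()
        self.ch = ch
        self.fc = nn.Linear(z_dim, ch * 4 * 2)  # seed a 4x2 feature map
        blocks, c = [], ch
        for _ in range(4):                      # 4x2 -> 8x4 -> ... -> 64x32
            blocks += [nn.Upsample(scale_factor=2, mode="nearest"),
                       nn.Conv2d(c, c // 2, 3, 1, 1),
                       nn.BatchNorm2d(c // 2), nn.ReLU()]
            c //= 2
        blocks += [nn.Conv2d(c, 1, 3, 1, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*blocks)
    def forward(self, z):
        return self.net(self.fc(z).view(-1, self.ch, 4, 2))
```

Sampling `Generator()(torch.randn(n, 100))` yields n maps of shape (1, 64, 32) with entries in [0, 1], matching the encoding described above.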

3. Adversarial Objective and Mathematical Formalism

HoneyGAN Pots uses the following adversarial objective provided by the WGAN-GP loss:

\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{x \sim p_{r}}\left[D(x)\right] - \mathbb{E}_{z \sim p_{z}}\left[D(G(z))\right] - \lambda\, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]

where p_r denotes the empirical distribution of real Shodan configurations, p_z denotes the standard normal prior, x̂ is sampled along lines between real and generated points, and λ = 10 is the gradient penalty coefficient.

For the conditional GANs, each real sample x is paired with its label y (either OS or device type), and G and D are both given y in addition to z or x.
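
The gradient penalty term can be illustrated with a toy NumPy example using a linear critic D(x) = w·x, whose input gradient is exactly w (a deliberately simplified stand-in for the CNN critic; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2048)                   # flattened 64x32 critic weights
x_real = rng.integers(0, 2, 2048).astype(float)   # a "real" binary map
x_fake = rng.random(2048)                   # a generator output in [0, 1]

# Interpolate between real and fake, as in WGAN-GP.
eps = rng.random()
x_hat = eps * x_real + (1 - eps) * x_fake

grad = w                                    # d/dx_hat of (w . x_hat) is w
lam = 10.0
penalty = lam * (np.linalg.norm(grad) - 1.0) ** 2

# Critic objective on this pair (to be maximized over w):
critic_obj = w @ x_real - w @ x_fake - penalty
```

The penalty is zero only when the critic's input gradient has unit norm at x̂, which is what keeps D within the 1-Lipschitz function class the Wasserstein objective requires.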

4. Data, Training, and Hyperparameters

The HoneyGAN Pots system was trained on 378,973 unique, internet-connected device records scraped from Shodan, each recording OS, build string, open ports, service names, and CPE identifiers. Preprocessing steps included mapping the top 30 ports by frequency to fixed feature columns, one-hot encoding services and CPEs, and truncating to 64 rows.

Hyperparameters:

  • Batch size: 64
  • Optimizer: Adam (learning rate 2 × 10⁻⁴, β₁ = 0.5, β₂ = 0.9)
  • WGAN-GP gradient penalty coefficient: λ = 10

Training schedule:

  • Unconditional GAN: 11,844 generator update steps (3 discriminator updates per step)
  • Conditional O/S GAN: 5,922 generator steps
  • Conditional DT GAN: 23,688 generator steps

Extending training beyond these schedules yielded no further improvements in sample quality or diversity (Gabrys et al., 2024).

5. Evaluation: Authenticity, Diversity, and Deception Effectiveness

Authenticity and Diversity:

Evaluation used the precision–recall distribution (PRD) framework of Sajjadi et al. to jointly measure on-manifold fidelity and coverage. The unconditional GAN reached recall 0.75 at precision above 0.80; the conditional GANs traded some sample quality for label control (O/S-GAN: precision ≈ 0.55 at recall 0.75; DT-GAN: precision ≈ 0.75).

Finite-sample diversity counts indicated that, out of 5,000 generated configurations, 1,209 were unique and 2,514 matched an observed Shodan configuration; a baseline of 5,000 real samples contained 2,844 unique configurations.

Model                    Unique Configs (of 5,000)   Matches to Real Shodan Records
Unconditional GAN        1,209                       2,514
Real Shodan (baseline)   2,844                       —
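
These statistics reduce to set arithmetic over hashable configuration encodings; a minimal sketch (function name and the string-valued configs are illustrative):

```python
def diversity_stats(generated, real):
    """Count unique generated configs and how many match a real record.

    generated, real: iterables of hashable configuration encodings,
    e.g. tuples of sorted (port, service, cpe) entries.
    """
    gen = list(generated)
    real_set = set(real)
    unique = len(set(gen))
    matches = sum(1 for g in gen if g in real_set)
    return unique, matches
```

For example, `diversity_stats(["a", "a", "b"], {"b", "c"})` returns `(2, 1)`: two distinct generated configs, one of which also appears among the real records.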

Deception Effectiveness:

Generated decoys were embedded into HoneyD for deployment as low-interaction honeypots. The Checkpot “Karma” metric, designed to measure realism under automated scan-based detection, showed that HoneyGAN Pots decoys (avg. 491.6 Karma) were nearly indistinguishable from decoys generated from real Shodan data (504.9), and vastly outperformed off-the-shelf public honeypots (60) (Gabrys et al., 2024).

6. Configuration Synthesis and Deployment

HoneyGAN Pots produces output in the form of JSON-style configuration objects combining syntactically valid and semantically coherent OS, port, service, and CPE fields. For example:

Sample 1 (Unconditional GAN):

{
  "os": "Windows Server 2019",
  "open_ports": [
    {"port": 80,  "service": "http", "cpe": "cpe:/o:microsoft:windows_server:2019"},
    {"port": 3389,"service": "rdp",  "cpe": "cpe:/a:microsoft:remote_desktop"}
  ]
}
Sample 2 (Conditional DT = “webserver”):

{
  "os": "Ubuntu 20.04",
  "open_ports": [
    {"port": 22,   "service": "ssh",    "cpe": "cpe:/a:open_ssh:ssh"},
    {"port": 443,  "service": "http",   "cpe": "cpe:/a:nginx:nginx"}
  ]
}

These configurations, while never observed in the training set, are plausible to automated scanning and reconnaissance routines. HoneyGAN Pots can be integrated with honeyd and similar daemons that accept JSON or template configurations. A “decoy-farm” controller can dynamically sample from G(z) and launch new decoy instances as needed. Once the generator is trained, synthesis incurs sub-millisecond latency per configuration on commodity GPUs or tens of milliseconds on CPUs, enabling production of thousands of decoys per hour. The generator network is 1–2 MB, with an inference footprint under 100 MB of RAM. Training requires only a single GPU for a few hours (Gabrys et al., 2024).
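
Turning a generated 64×32 map into a deployable JSON object is the inverse of the encoding: threshold the generator's [0, 1] outputs and look up names in vocabulary tables. A sketch, with tiny hypothetical lookup tables standing in for the real vocabularies:

```python
import numpy as np

# Hypothetical vocabularies; the real system's index tables are not
# given in the summary.
OS_NAMES = {3: "Windows Server 2019"}
SERVICE_NAMES = {5: "http"}
CPE_NAMES = {9: "cpe:/o:microsoft:windows_server:2019"}
TOP_PORTS = {0: 80}

def decode_config(m, thresh=0.5):
    """Decode a 64x32 map (entries in [0, 1]) into a config dict."""
    b = m >= thresh
    cfg = {"os": OS_NAMES.get(int(np.argmax(b[:32, 0])), "unknown"),
           "open_ports": []}
    for col in range(2, 32):                    # the 30 port columns
        if b[:32, col].any() and b[32:, col].any():
            svc = int(np.argmax(b[:32, col]))   # service bit, top half
            cpe = int(np.argmax(b[32:, col]))   # CPE bit, bottom half
            cfg["open_ports"].append({
                "port": TOP_PORTS.get(col - 2, col - 2),
                "service": SERVICE_NAMES.get(svc, f"svc_{svc}"),
                "cpe": CPE_NAMES.get(cpe, f"cpe_{cpe}")})
    return cfg
```

A decoy-farm controller would then pass the resulting dict (serialized with `json.dumps`) to honeyd or another daemon that accepts JSON or template configurations.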

7. Comparative Assessment with Prior Approaches

Prior honeypot configuration paradigms depend on defender-maintained static lists or monolithic VM image collections. These do not automatically capture shifting distributions in the threat landscape, produce combinatorial novelty, or scale efficiently. HoneyGAN Pots:

  • Learns the empirical joint distribution of (OS, build, port → service/CPE) from real-world deployments.
  • Supports both unconstrained and conditional (OS, device-type) sample generation without new template engineering.
  • Produces decoys that are evaluated as nearly indistinguishable from genuine devices by automated tools.
  • Allows blue-team operators to deploy hundreds or thousands of diverse, high-realism honeypots by simple generator sampling, obviating brittle manual processes and maintaining pace with attacker tactics (Gabrys et al., 2024).
References (1)
