Golden Eagle Genetic Optimization (GEGO)
- GEGO is a hybrid metaheuristic that integrates physics-inspired attack-cruise dynamics with genetic operators to enhance search in high-dimensional spaces.
- It maintains population diversity by embedding selection, crossover, and mutation into the standard Golden Eagle Optimization framework.
- Benchmark results demonstrate GEGO’s improved performance in hyperparameter tuning and fog computing scheduling compared to traditional methods.
Golden Eagle Genetic Optimization (GEGO) is a hybrid metaheuristic optimization framework developed to address computationally expensive and high-dimensional search problems where classic derivative-based or single-mechanism metaheuristics often underperform. GEGO was explicitly designed for tasks such as hyperparameter tuning under resource constraints and discrete combinatorial scheduling in fog computing, integrating the physics-inspired population movement of Golden Eagle Optimization (GEO) with genetic algorithm (GA) operators—selection, crossover, and mutation. The approach aims to enforce sustained population diversity throughout the search, inhibit premature convergence typical of swarm-based algorithms, and provide robust exploration-exploitation dynamics with minimal computational overhead (Nazarians et al., 21 Jan 2026, Sirjani et al., 9 Sep 2025).
1. Theoretical Foundations and Hybrid Motivation
Golden Eagle Optimization (GEO) is a swarm-based metaheuristic algorithm inspired by the hunting strategies of golden eagles, alternating between exploration (“cruise”) and exploitation (“attack”) in the solution space. GEO demonstrates strong global search capability in continuous domains but often suffers from loss of diversity and premature convergence, especially in later iterations or under small population/resource constraints.
Genetic Algorithms (GA), in contrast, use explicit selection, crossover, and mutation to maintain genetic diversity and allow for long-range jumps within the search space, but can lack effective guidance toward promising regions, leading to inefficiency in purely high-dimensional, continuous, or rugged landscapes. Hybridizing GEO with embedded genetic operations addresses these drawbacks by periodically injecting diversity and global information exchange directly into the optimizing population, thus balancing exploitation and exploration more effectively within limited computational budgets (Nazarians et al., 21 Jan 2026).
2. Algorithmic Structure and Mathematical Formulation
GEGO maintains a population of $N$ individuals in a $D$-dimensional search space. The core GEGO iterative loop comprises the following components:
- Attack–Cruise Update (Standard GEO):
- Each solution updates its position using two principal vectors:
- Attack vector: $\vec{A}_i = \vec{x}_{\text{prey}} - \vec{x}_i$, where $\vec{x}_{\text{prey}}$ is the position of a randomly selected “prey” (other population member)
- Cruise vector: $\vec{C}_i$ is orthogonal to $\vec{A}_i$, introducing lateral exploration.
- Displacement step: $\Delta\vec{x}_i = \vec{r}_1\, p_a \frac{\vec{A}_i}{\lVert\vec{A}_i\rVert} + \vec{r}_2\, p_c \frac{\vec{C}_i}{\lVert\vec{C}_i\rVert}$, where $\vec{r}_1$, $\vec{r}_2$ are random vectors
- Time-varying attack and cruise coefficients gradually modulate exploration versus exploitation.
- Periodic Genetic Phase (Embedded Operators):
- Every $G$ iterations, the current solutions are encoded as binary chromosomes.
- Selection uses binary tournaments: pairs are drawn from the population of $N$; the fitter individual of each pair becomes a parent.
- Crossover (linear, rate $P_c$): offspring are created as $x_{\text{child}} = \alpha x_{p_1} + (1-\alpha) x_{p_2}$, $\alpha \in [0,1]$.
- Mutation (rate $P_m$): each bit flips with probability $P_m$; bit-flips decode to perturbations in continuous space.
- Offspring replace parents if they confer fitness improvement.
- Genetic operator frequency ($G$) and rates ($P_c$, $P_m$) are hyperparameters and allow explicit diversity regulation (Nazarians et al., 21 Jan 2026).
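The loop above can be sketched in a few dozen lines. This is a minimal NumPy sketch, not the paper's implementation: it assumes a real-valued encoding throughout (the paper encodes solutions as binary chromosomes for the genetic phase), substitutes Gaussian perturbation for bit-flip mutation, and uses illustrative schedules and replacement rules; `gego_minimize` and all parameter names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def gego_minimize(f, dim, n=20, iters=100, g=5, p_cross=0.8, p_mut=0.01,
                  attack=(0.5, 2.0), cruise=(1.0, 0.5), lo=-5.0, hi=5.0):
    """Minimal GEGO sketch: GEO attack-cruise moves plus a genetic phase every g iterations."""
    pop = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in pop])
    for t in range(iters):
        # Linearly scheduled propensities: attack grows, cruise shrinks over time.
        pa = attack[0] + (attack[1] - attack[0]) * t / iters
        pc = cruise[0] + (cruise[1] - cruise[0]) * t / iters
        for i in range(n):
            prey = pop[rng.integers(n)]                  # randomly selected "prey"
            a = prey - pop[i]                            # attack vector
            c = rng.standard_normal(dim)
            c -= (c @ a) / (a @ a + 1e-12) * a           # cruise vector, orthogonal to a
            step = (rng.random(dim) * pa * a / (np.linalg.norm(a) + 1e-12)
                    + rng.random(dim) * pc * c / (np.linalg.norm(c) + 1e-12))
            cand = np.clip(pop[i] + step, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                              # greedy acceptance (illustrative)
                pop[i], fit[i] = cand, fc
        if (t + 1) % g == 0:                             # periodic genetic phase
            for _ in range(n // 2):
                i, j = rng.choice(n, 2, replace=False)   # binary tournament for parent 1
                p1 = pop[i] if fit[i] < fit[j] else pop[j]
                i, j = rng.choice(n, 2, replace=False)   # binary tournament for parent 2
                p2 = pop[i] if fit[i] < fit[j] else pop[j]
                if rng.random() < p_cross:               # linear (arithmetic) crossover
                    alpha = rng.random()
                    child = alpha * p1 + (1 - alpha) * p2
                else:
                    child = p1.copy()
                mask = rng.random(dim) < p_mut           # per-gene mutation
                child[mask] += 0.1 * (hi - lo) * rng.standard_normal(int(mask.sum()))
                child = np.clip(child, lo, hi)
                fch = f(child)
                worst = np.argmax(fit)                   # replace only on improvement
                if fch < fit[worst]:
                    pop[worst], fit[worst] = child, fch
    best = np.argmin(fit)
    return pop[best], fit[best]
```

On a 5-dimensional sphere function, this sketch reliably drives the best fitness well below the initial random population's values within 100 iterations.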
The pseudocode, initialization and update steps, and per-stage computational complexity are formalized to match the standard $O(N(D + C_f))$ cost per iteration, where $C_f$ denotes the cost of one fitness evaluation, ensuring practical compatibility with standalone GA and GEO (see table below).
| Algorithm | Complexity per Iteration | Diversity Source |
|---|---|---|
| GEO | $O(N(D + C_f))$ | Stochastic movement |
| GA | $O(N(D + C_f))$ | Selection/crossover/mutation |
| GEGO | $O(N(D + C_f)) + O(\text{genetic})$ | Both (embedded genetic ops) |
3. Parameterization, Computational Characteristics, and Implementation
GEGO retains low parameterization overhead with the following tunable hyperparameters:
- $N$: population size (typically 20–50)
- $T$: maximum iterations (100–1000)
- $p_a^{0}$, $p_a^{T}$: initial and final attack propensity (e.g., $[0.5, 2]$)
- $p_c^{0}$, $p_c^{T}$: initial and final cruise propensity (e.g., $[1, 0.5]$)
- $G$: genetic phase frequency (e.g., every 3–5 iterations)
- $P_c$: crossover probability
- $P_m$: mutation probability (low, e.g., $0.001$ per bit)
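The hyperparameter surface is small enough to bundle in a single configuration object. The sketch below is illustrative only: field names are ours, and defaults not stated in the text (e.g., the crossover probability) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class GEGOConfig:
    """Illustrative GEGO hyperparameter bundle; defaults follow the ranges above."""
    n_pop: int = 30          # population size (typically 20-50)
    max_iter: int = 500      # maximum iterations (100-1000)
    attack_start: float = 0.5   # initial attack propensity
    attack_end: float = 2.0     # final attack propensity
    cruise_start: float = 1.0   # initial cruise propensity
    cruise_end: float = 0.5     # final cruise propensity
    gen_every: int = 5       # genetic phase frequency (every 3-5 iterations)
    p_cross: float = 0.8     # crossover probability (assumed default; unspecified in text)
    p_mut: float = 0.001     # per-bit mutation probability
```

Keeping the schedule endpoints explicit (rather than a single coefficient) makes the exploration-to-exploitation transition a tunable, inspectable quantity.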
The embedding of genetic operators does not alter asymptotic computational complexity relative to GEO or GA alone, incurring only minor constant-factor overhead from encoding/decoding and bitwise genetic manipulations. In discrete combinatorial scheduling, such as task allocation in fog computing, GEGO (introduced as IGEO in (Sirjani et al., 9 Sep 2025)) adapts by using hybrid GEO/genetic moves to generate discrete assignments, exploiting mutation for exploration and crossover for exploitation contingent on the sign of each step vector component.
4. Empirical Performance on Benchmarks and Real-World Problems
Comprehensive benchmarking on the CEC2017 suite and practical application to neural network hyperparameter tuning and fog computing scheduling tasks demonstrate that GEGO consistently outperforms both constituent algorithms in robustness and solution quality.
CEC2017 Suite Results (Selected):
- On 30D unimodal/multimodal benchmarks ($N=20$, $T=100$), GEGO achieves lower or comparable mean and standard deviation of best-found objective values than GEO and GA (e.g., on the EggHolder function).
- On 100D composite functions ($N=50$, $T=1000$), GEGO outperforms GEO on 8 of 10 functions and consistently surpasses GA (e.g., on CF3) (Nazarians et al., 21 Jan 2026).
Hyperparameter Tuning (MNIST, D=10):
- Under $N=10$, $T=15$, GEGO yields the highest mean test accuracy and the lowest loss (0.06575), surpassing both GEO and GA across all ten trials.
- Standard deviation is higher for GEGO, reflecting greater exploration and an ability to escape local optima where standalone GEO stagnates.
Fog Computing Task Scheduling:
- Genetic-discretized GEO (IGEO) improves convergence by 15–20% over pure GEO in discrete assignment.
- In 600-task scheduling on 20 heterogeneous fog nodes, IGEO reduces both total energy consumption and deadline violation time relative to the ETFC baseline, with further reductions under the RL-augmented RIGEO extension (Sirjani et al., 9 Sep 2025).
5. Exploration–Exploitation Balance and Diversity Maintenance
Periodic embedding of genetic crossover and mutation enables GEGO to sustain high population diversity. Crossover acts as a global information exchange mechanism, while mutation provides stochastic exploration, serving as an effective stagnation breaker. Empirical evidence includes the observed avoidance of stagnation in multimodal functions and broader exploration in hyperparameter configurations for neural networks. A plausible implication is that the higher standard deviation in GEGO optimization outcomes, relative to GEO, signifies more effective exploration of the search space and an increased probability of identifying better global optima (Nazarians et al., 21 Jan 2026).
6. Extensions: Discrete Domains and Reinforcement Learning Integration
IGEO, also referred to as GEGO (Discrete), denotes the adaptation to combinatorial scheduling, where genetic operators are conditioned on the attack–cruise update directionality: negative step components trigger mutation for exploration, while positive components prefer crossover for exploitation.
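A minimal sketch of this sign-conditioned rule, assuming a task-to-node assignment vector as the discrete encoding; the function name, the encoding, and the peer-selection policy are illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def discrete_move(assign, step, peer, n_nodes, p_mut=0.05):
    """Sign-conditioned discrete move (IGEO-style sketch).

    assign : current task-to-node assignment (assign[t] = fog node index of task t)
    step   : continuous attack-cruise step vector, one component per task
    peer   : assignment of another (typically fitter) individual

    Negative step components trigger mutation (exploration: random reassignment);
    non-negative components prefer crossover (exploitation: inherit the peer's gene).
    """
    child = assign.copy()
    for t in range(len(assign)):
        if step[t] < 0:
            if rng.random() < p_mut:
                child[t] = rng.integers(n_nodes)   # mutate: random node
        else:
            child[t] = peer[t]                     # crossover: copy from peer
    return child
```

With the mutation rate set to zero, the rule reduces to a deterministic mask: tasks with non-negative step components adopt the peer's assignment, the rest keep their own.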
The Reinforcement Improved GEO (RIGEO) extension integrates reinforcement learning (RL) for situations where rapid task deadline satisfaction is mandatory. RIGEO classifies fog nodes by traffic; low-traffic nodes utilize IGEO, and high-traffic, high-criticality tasks are scheduled by a Q-table-based RL agent, resulting in further improvements in system response time and energy under mixed workload conditions (Sirjani et al., 9 Sep 2025).
7. Limitations and Prospects
GEGO demonstrates robust performance across mixed continuous/discrete optimization scenarios and constrained-resource regimes, but shows mixed comparative performance against more adaptive differential-evolution variants such as L-SHADE, particularly on very large composite fitness landscapes.
Future research directions include large-scale statistical studies with formal Wilcoxon/Friedman significance testing, adaptation to deeper model architectures and larger datasets (e.g., CIFAR, ImageNet), and dynamic self-tuning of genetic operator frequencies and rates, as well as further investigation into hybrid metaheuristic–RL systems for distributed optimization in edge and fog computing (Nazarians et al., 21 Jan 2026, Sirjani et al., 9 Sep 2025).