Inverted Generational Distance (IGD) in Optimization
- Inverted Generational Distance (IGD) is a metric that measures how closely and evenly candidate solutions approximate the true Pareto front in multi-objective optimization.
- The approach involves constructing a uniformly distributed reference set and ranking solutions using proximity distances to guide selection.
- IGD is integral to many-objective evolutionary algorithms, improving selection efficiency and yielding robust performance across diverse problem benchmarks.
Inverted Generational Distance (IGD) is a widely recognized quantitative indicator employed for the concurrent assessment of convergence and diversity in multi-objective and many-objective evolutionary algorithms. IGD quantifies how closely and evenly a set of candidate solutions approximates the true or reference Pareto front. Within many-objective optimization frameworks, IGD serves not only as a performance measure but also as a central selection criterion, shaping population evolution by enforcing coverage of the entire objective space while guiding the search process towards Pareto optimality (Sun et al., 2018).
1. Formal Definition of IGD
Let $P^*$ denote a set of reference points ideally distributed along the true Pareto front (PF), and let $P$ represent the set of objective vectors yielded by the current population. The Inverted Generational Distance is mathematically defined as

$$\mathrm{IGD}(P, P^*) = \frac{1}{|P^*|} \sum_{v \in P^*} d(v, P),$$

where

$$d(v, P) = \min_{u \in P} \lVert v - u \rVert$$

is the Euclidean distance from reference point $v$ to its closest member in $P$. A lower IGD value indicates that $P$ provides both accurate convergence to, and comprehensive coverage of, $P^*$, ensuring closeness to the PF and even spread across the objective directions (Sun et al., 2018).
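The definition above translates directly into a few lines of NumPy. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def igd(reference_set, solutions):
    """Inverted Generational Distance: mean distance from each
    reference point to its nearest solution (lower is better)."""
    # Pairwise Euclidean distances: one row per reference point.
    diffs = reference_set[:, None, :] - solutions[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # For each reference point, distance to the closest solution.
    return dists.min(axis=1).mean()

# Toy 2-objective example: reference points on the line f1 + f2 = 1.
ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
pop = np.array([[0.1, 0.9], [0.9, 0.1]])
print(igd(ref, pop))  # ≈ 0.2828
```

Note the asymmetry: IGD averages over the *reference* points, so a population that converges well but misses a region of the front is penalized for the uncovered reference points.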
2. Construction of the Reference Set
The generation of a reference set is essential due to the unavailability of the true PF in practical many-objective problems. MaOEA/IGD (IGD-indicator-based Many-objective Evolutionary Algorithm) introduces a two-step protocol:
a) Decomposition-based Nadir Point Estimation (DNPE):
For an $m$-objective minimization task with objectives $f_1, \dots, f_m$, each extreme point $x_j^*$ ($j = 1, \dots, m$) is determined by solving

$$\min_{x} \; g_j(x) = \Big( \sum_{i=1,\, i \neq j}^{m} f_i(x)^2 \Big)^{1/2},$$

i.e., by minimizing the distance of the objective vector to the $f_j$ axis. This isolates each objective's extreme while penalizing suboptimality on the other criteria. The nadir point is given by $z^{\mathrm{nad}} = \big(f_1(x_1^*), \dots, f_m(x_m^*)\big)^{\mathsf T}$, while the ideal point is $z^{*} = \big(\min_x f_1(x), \dots, \min_x f_m(x)\big)^{\mathsf T}$.
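A sketch of the two pieces described above. The scalarization `dnpe_scalar` is a plausible reconstruction of DNPE's per-objective subproblem (distance to the $f_j$ axis); `nadir_ideal` simply reads the nadir and ideal estimates off the extreme points, as in the text. Both names are illustrative:

```python
import numpy as np

def dnpe_scalar(objs, j):
    """Scalarized subproblem for the j-th extreme point: distance of an
    objective vector to the f_j axis (assumed form of DNPE's g_j)."""
    mask = np.arange(objs.shape[-1]) != j
    return np.linalg.norm(objs[..., mask], axis=-1)

def nadir_ideal(extreme_objs):
    """Given the m x m matrix whose row j is the objective vector of the
    j-th extreme solution, read off nadir and ideal point estimates."""
    z_nad = np.diag(extreme_objs)       # z_nad[j] = f_j(x_j*)
    z_ideal = extreme_objs.min(axis=0)  # component-wise minimum
    return z_nad, z_ideal
```

In practice `dnpe_scalar` would be minimized by the evolutionary search itself, one subproblem per objective; the helper here only evaluates the scalarized objective.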
b) Sampling a Utopian Pareto Front:
- Generate $N$ weight vectors $w^1, \dots, w^N$ distributed uniformly on the unit simplex ($\sum_{i=1}^{m} w_i = 1$, $w_i \ge 0$), using the Das–Dennis method.
- Each $w^k$ is mapped to the objective space via

$$v^k = z^{*} + w^k \circ (z^{\mathrm{nad}} - z^{*}),$$

where $\circ$ denotes component-wise multiplication, yielding a reference set $P^* = \{v^1, \dots, v^N\}$ (Sun et al., 2018).
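The two steps above can be sketched as follows. Das–Dennis weight generation is the standard simplex-lattice construction; the mapping follows the component-wise formula in the text (function names are illustrative):

```python
import itertools
import numpy as np

def das_dennis(m, h):
    """Das–Dennis simplex-lattice weights for m objectives with h
    divisions: all vectors with components in {0, 1/h, ..., 1} summing
    to 1, enumerated via a stars-and-bars construction."""
    weights = []
    for bars in itertools.combinations(range(h + m - 1), m - 1):
        prev, w = -1, []
        for b in bars:
            w.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        w.append(h + m - 2 - prev)   # stars after the last bar
        weights.append(w)
    return np.asarray(weights, dtype=float) / h

def utopian_front(weights, z_ideal, z_nad):
    """Map simplex weights onto the box spanned by the ideal and nadir
    points, component-wise (the circle operator in the text)."""
    return z_ideal + weights * (z_nad - z_ideal)
```

For $m$ objectives and $h$ divisions this yields $\binom{h+m-1}{m-1}$ reference points, which is why the divisions parameter must be kept small as $m$ grows.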
3. IGD-Guided Selection Mechanism
The IGD indicator shapes solution selection at each evolutionary generation. The procedure involves:
a) Rank Assignment:
Each candidate $u \in P$ is categorized into one of three ranks upon comparison with $P^*$:
- Rank 1 ("better-than-PF"): $u$ dominates at least one $v \in P^*$.
- Rank 2 ("straddling the PF"): $u$ is non-dominated with respect to all $v \in P^*$.
- Rank 3 ("worse-than-PF"): $u$ is dominated by some $v \in P^*$.
b) Proximity Distance Assignment:
For each $u \in P$ and $v \in P^*$, assign a proximity distance $d(u, v)$ based on $u$'s rank:
- Rank 1: $d(u, v) = -\lVert u - v \rVert$ (negative Euclidean)
- Rank 2: $d(u, v) = \big( \sum_{i=1}^{m} \max(u_i - v_i, 0)^2 \big)^{1/2}$ (IGD-plus style)
- Rank 3: $d(u, v) = \lVert u - v \rVert$ (Euclidean)

Candidates are then compared lexicographically, rank first and proximity distance second, facilitating tie-breaking within ranks.
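The rank and proximity scheme above can be sketched as follows; the numeric rank encoding and the order of the dominance checks are conventions chosen here, not mandated by the paper:

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return bool(np.all(a <= b) and np.any(a < b))

def rank_and_proximity(u, ref_set):
    """Assign a candidate's rank against the reference set and its
    proximity distance to every reference point."""
    if any(dominates(u, v) for v in ref_set):
        rank = 1                                        # better-than-PF
        prox = -np.linalg.norm(ref_set - u, axis=1)
    elif any(dominates(v, u) for v in ref_set):
        rank = 3                                        # worse-than-PF
        prox = np.linalg.norm(ref_set - u, axis=1)
    else:
        rank = 2                                        # straddling the PF
        # IGD+-style distance: count only objectives where u is worse.
        prox = np.linalg.norm(np.maximum(u - ref_set, 0.0), axis=1)
    return rank, prox
```

The signs are chosen so that smaller proximity is always better: rank-1 candidates are rewarded for pushing past the reference front, while rank-2 candidates are measured only along the objectives where they fall short of each reference point.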
c) Population Update via Linear Assignment:
Combined offspring and parent populations are partitioned by rank. To control population size, particularly when a front must be truncated, a linear assignment problem is solved between solutions and reference points, minimizing the sum of proximity distances. The Hungarian method ensures that each reference direction is represented at most once, enforcing both diversity and convergence at the selection stage (Sun et al., 2018).
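SciPy's `linear_sum_assignment` implements a Hungarian-style solver and is a natural fit for this step. The sketch below assigns solutions (rows) to reference points (columns) and, as a simplification of the paper's truncation rule, keeps the $k$ matched pairs with the smallest assigned proximity distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def select_by_assignment(proximity, k):
    """Select k solutions via a linear assignment between solutions
    (rows) and reference points (columns), minimizing the total
    proximity distance; each reference direction is matched at most
    once, which enforces diversity across reference directions."""
    rows, cols = linear_sum_assignment(proximity)
    # Keep the k matched pairs with the smallest assigned distance
    # (a simplified truncation rule, assumed for this sketch).
    order = np.argsort(proximity[rows, cols])
    return rows[order[:k]]
```

With a rectangular cost matrix (more solutions than reference points), `linear_sum_assignment` matches only as many solutions as there are reference directions, so unmatched solutions are discarded automatically.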
4. Computational Efficiencies
Classical evolutionary algorithms often rely on exhaustive pairwise dominance comparisons, which require $O(N^2)$ operations for $N$ solutions. MaOEA/IGD simplifies this by exclusively comparing candidate solutions to the fixed set $P^*$ (with cardinality typically comparable to the population size), reducing computational overhead. Because $P^*$ remains fixed from generation to generation, precomputation and reuse of distance metrics are enabled, further enhancing algorithmic efficiency. The three-way rank plus proximity distance assignment maintains Pareto-compliant selection pressure even in high-dimensional and heavily non-dominated objective spaces (Sun et al., 2018).
5. Empirical Role and Algorithmic Impact
The evenly distributed reference set $P^*$ imparts IGD with the ability to simultaneously measure and enforce both convergence and diversity. Extensive testing on the DTLZ and WFG benchmark suites, involving problems with 8, 15, and 20 objectives, placed MaOEA/IGD in the top tier (80–90% of cases) relative to NSGA-III, MOEA/D, HypE, RVEA, and KnEA. Its performance—assessed primarily through IGD and Hypervolume—demonstrated statistical competitiveness and, in many instances, superiority, particularly in representing varied PF geometries (linear, concave, convex). The DNPE approach to nadir point estimation required significantly fewer function evaluations and exhibited robustness against PF shape and objective scaling compared to alternate estimators such as WC-NSGA-II and PCSEA (Sun et al., 2018).
6. Synthesis and Research Significance
The IGD paradigm, particularly as instantiated in MaOEA/IGD, advances many-objective function optimization by:
- Efficient nadir point decomposition and reference set construction,
- Enforcing even coverage via simplex-projected weights,
- Integrating global selection mechanisms that directly optimize for IGD minimization generation-by-generation.
A plausible implication is that such integration of IGD at the selection level makes the algorithm robust across diverse problem characteristics, without requiring problem-specific reference front information—a persistent challenge for many-objective evolutionary computation (Sun et al., 2018).