Inverted Generational Distance (IGD) in Optimization

Updated 13 April 2026
  • Inverted Generational Distance (IGD) is a metric that measures how closely and evenly candidate solutions approximate the true Pareto front in multi-objective optimization.
  • The approach involves constructing a uniformly distributed reference set and ranking solutions using proximity distances to guide selection.
  • IGD is integral to many-objective evolutionary algorithms, where it improves selection efficiency and yields robust performance across diverse problem benchmarks.

Inverted Generational Distance (IGD) is a widely recognized quantitative indicator employed for the concurrent assessment of convergence and diversity in multi-objective and many-objective evolutionary algorithms. IGD quantifies how closely and evenly a set of candidate solutions approximates the true or reference Pareto front. Within many-objective optimization frameworks, IGD serves not only as a performance measure but also as a central selection criterion, shaping population evolution by enforcing coverage of the entire objective space while guiding the search process towards Pareto optimality (Sun et al., 2018).

1. Formal Definition of IGD

Let $P^* \subset \mathbb{R}^m$ denote a set of reference points ideally distributed along the true Pareto front (PF), and let $A \subset \mathbb{R}^m$ represent the set of objective vectors yielded by the current population. The Inverted Generational Distance is mathematically defined as

$$\mathrm{IGD}(P^*, A) = \frac{1}{|P^*|} \sum_{v \in P^*} d(v, A)$$

where

$$d(v, A) = \min_{u \in A} \sqrt{\sum_{i=1}^{m} (v_i - u_i)^2}$$

is the Euclidean distance from reference point $v$ to its closest member in $A$. A lower IGD value indicates that $A$ provides both accurate convergence to, and comprehensive coverage of, $P^*$, ensuring closeness to the PF and even spread across the objective directions (Sun et al., 2018).
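As a concrete check of the definition, the metric can be computed directly from the formula. The following sketch (plain Python; the two-objective reference front is illustrative and not taken from the paper) averages, over the reference set, each reference point's distance to its nearest candidate:

```python
import math

def igd(reference_set, solutions):
    """IGD: mean Euclidean distance from each reference point
    to its nearest member of the candidate solution set."""
    def euclidean(v, u):
        return math.sqrt(sum((vi - ui) ** 2 for vi, ui in zip(v, u)))
    return sum(min(euclidean(v, u) for u in solutions)
               for v in reference_set) / len(reference_set)

# Illustrative bi-objective example: reference points on the line
# f1 + f2 = 1, approximated by two candidate solutions.
ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.1, 0.9), (0.9, 0.1)]
print(round(igd(ref, approx), 4))  # 0.2828
```

Note that the mean is taken over the reference set, not the candidate set: an approximation that clusters near one part of the front leaves other reference points far from any candidate, so poor spread is penalized just as poor convergence is.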

2. Construction of the Reference Set

The generation of a reference set $P^*$ is essential because the true PF is unavailable in practical many-objective problems. MaOEA/IGD (the IGD-indicator-based Many-objective Evolutionary Algorithm) introduces a two-step protocol:

a) Decomposition-based Nadir Point Estimation (DNPE):

For an $m$-objective minimization task with objective vector $f(x) = (f_1(x), \ldots, f_m(x))$, each extreme solution $x_i^*$ ($i = 1, \ldots, m$) is determined by solving

$$\min_{x} \; \sum_{j \neq i} f_j(x) + \varepsilon f_i(x)$$

with $0 < \varepsilon \ll 1$. This isolates each objective's extreme while penalizing suboptimality on the other criteria. The nadir point is given by $z^{\mathrm{nad}} = (f_1(x_1^*), \ldots, f_m(x_m^*))$, while the ideal point is $z^{*} = (\min_x f_1(x), \ldots, \min_x f_m(x))$.
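A minimal sketch of the estimation step, assuming a scalarization of the form $\min_x \sum_{j \neq i} f_j(x) + \varepsilon f_i(x)$ for the $i$-th extreme (an assumption here; the paper's exact scalarizing function may differ). The toy bi-objective problem $f_1 = x^2$, $f_2 = (x-2)^2$ on $[0, 2]$ is not from the paper; its true nadir point is $(4, 4)$ and ideal point $(0, 0)$, and plain random search stands in for the evolutionary solver:

```python
import random

def f(x):
    """Toy bi-objective problem (illustrative only): minimise
    f1 = x^2 and f2 = (x - 2)^2 over x in [0, 2]."""
    return (x ** 2, (x - 2.0) ** 2)

def dnpe_estimate(num_samples=5000, eps=1e-3, seed=1):
    """Sketch of decomposition-based nadir point estimation: the i-th
    scalarised subproblem minimises the other objectives plus a small
    penalty eps * f_i, driving the search to the extreme of objective i;
    the nadir point collects f_i at the i-th extreme solution, the
    ideal point the per-objective minima."""
    random.seed(seed)
    xs = [random.uniform(0.0, 2.0) for _ in range(num_samples)]
    m = 2
    extremes = []
    for i in range(m):
        def g(x, i=i):
            obj = f(x)
            return sum(obj[j] for j in range(m) if j != i) + eps * obj[i]
        extremes.append(min(xs, key=g))
    nadir = tuple(f(extremes[i])[i] for i in range(m))
    ideal = tuple(min(f(x)[i] for x in xs) for i in range(m))
    return nadir, ideal

nadir, ideal = dnpe_estimate()
```

On this problem the estimate lands near the true $z^{\mathrm{nad}} = (4, 4)$ and $z^{*} = (0, 0)$ up to sampling error.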

b) Sampling a Utopian Pareto Front:

  1. Generate $N$ weight vectors $\lambda^1, \ldots, \lambda^N$ distributed uniformly on the unit simplex ($\sum_{i=1}^{m} \lambda_i = 1$, $\lambda_i \geq 0$), using the Das–Dennis method.
  2. Each $\lambda^k$ is mapped to the objective space via

$$v^k = z^{*} + \lambda^k \circ (z^{\mathrm{nad}} - z^{*})$$

where $\circ$ denotes component-wise multiplication, yielding a reference set $P^* = \{v^1, \ldots, v^N\}$ (Sun et al., 2018).
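Both steps can be sketched in plain Python, assuming the standard Das–Dennis simplex-lattice construction (with $H$ divisions per objective the number of weights is $N = \binom{H+m-1}{m-1}$) and hypothetical ideal/nadir points:

```python
def _compositions(m, h):
    """All m-tuples of non-negative integers summing to h."""
    if m == 1:
        return [(h,)]
    return [(i,) + tail
            for i in range(h + 1)
            for tail in _compositions(m - 1, h - i)]

def das_dennis(m, h):
    """Uniform weight vectors on the unit simplex: each component is a
    multiple of 1/h and the components sum to 1 (Das-Dennis lattice)."""
    return [tuple(i / h for i in c) for c in _compositions(m, h)]

def reference_set(weights, ideal, nadir):
    """Map each weight vector into objective space via
    v = z* + lambda o (z_nad - z*), component-wise."""
    return [tuple(z + lam * (n - z) for z, lam, n in zip(ideal, w, nadir))
            for w in weights]

# m = 3 objectives, h = 4 divisions -> C(6, 2) = 15 weight vectors.
weights = das_dennis(3, 4)
refs = reference_set(weights, ideal=(0.0, 0.0, 0.0), nadir=(4.0, 4.0, 2.0))
```

Because the weights live on the simplex, the mapped points interpolate between the ideal and nadir points and cover every objective direction evenly, which is exactly what the IGD indicator needs from $P^*$.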

3. IGD-Guided Selection Mechanism

The IGD indicator shapes solution selection at each evolutionary generation. The procedure involves:

a) Rank Assignment:

Each candidate $u \in A$ is categorized into one of three ranks upon comparison with $P^*$:

  • Rank 1 ("better-than-PF"): $u$ dominates at least one $v \in P^*$.
  • Rank 2 ("straddling the PF"): $u$ is non-dominated with respect to every $v \in P^*$.
  • Rank 3 ("worse-than-PF"): $u$ is dominated by some $v \in P^*$.
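The rank assignment can be sketched as follows, assuming standard Pareto dominance for minimization (as a simplification, rank 1 is checked first if a candidate relates differently to different reference points):

```python
def dominates(a, b):
    """Pareto dominance (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def assign_rank(u, ref_set):
    """Three-way rank of candidate u against the reference set P*:
    1 = dominates at least one reference point ("better-than-PF"),
    3 = dominated by some reference point ("worse-than-PF"),
    2 = mutually non-dominated with all of P* ("straddling the PF")."""
    if any(dominates(u, v) for v in ref_set):
        return 1
    if any(dominates(v, u) for v in ref_set):
        return 3
    return 2

# Illustrative reference points on the front f1 + f2 = 1.
ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
```

For example, against `ref` the candidate (0.2, 0.2) dominates (0.5, 0.5) and receives rank 1, (0.6, 0.6) is dominated and receives rank 3, and (0.1, 0.8) straddles the front with rank 2.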

b) Proximity Distance Assignment:

For each $u \in A$ and $v \in P^*$, assign a proximity $d(u, v)$ based on $u$'s rank:

  • For Rank 1: $d(u, v) = -\sqrt{\sum_{i=1}^{m} (u_i - v_i)^2}$ (negative Euclidean)
  • For Rank 2: $d(u, v) = \sqrt{\sum_{i=1}^{m} \max(u_i - v_i,\, 0)^2}$ (IGD-plus style)
  • For Rank 3: $d(u, v) = \sqrt{\sum_{i=1}^{m} (u_i - v_i)^2}$ (Euclidean)

The overall proximity of $u$ is $d(u) = \min_{v \in P^*} d(u, v)$, facilitating tie-breaking within ranks.
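The three cases can be sketched directly in plain Python (minimization assumed; a sketch of the idea rather than the paper's exact routine):

```python
import math

def proximity(u, v, rank):
    """Rank-dependent proximity of candidate u to reference point v:
    rank 1 -> negative Euclidean distance (exceeding the utopian
              front is rewarded),
    rank 2 -> IGD+-style distance, counting only coordinates where
              u falls short of v (max(u_i - v_i, 0)),
    rank 3 -> plain Euclidean distance."""
    if rank == 2:
        return math.sqrt(sum(max(ui - vi, 0.0) ** 2
                             for ui, vi in zip(u, v)))
    d = math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))
    return -d if rank == 1 else d

def overall_proximity(u, ref_set, rank):
    """Tie-breaker within a rank: minimum proximity over P*."""
    return min(proximity(u, v, rank) for v in ref_set)
```

The sign convention makes proximity directly comparable within a rank: smaller is always better, whether the candidate sits beyond, astride, or behind the utopian front.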

c) Population Update via Linear Assignment:

Combined offspring and parent populations are partitioned by rank. To control population size, particularly when a front must be truncated, a linear assignment problem is solved between solutions and reference points, minimizing the sum of proximity distances. The Hungarian method ensures that each reference direction is represented at most once, enforcing both diversity and convergence at the selection stage (Sun et al., 2018).
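The truncation step can be illustrated as a linear assignment problem. The sketch below brute-forces the optimal one-to-one matching over permutations, which is only feasible for tiny instances; MaOEA/IGD solves the same problem with the Hungarian method, which runs in $O(n^3)$ time:

```python
import math
from itertools import permutations

def best_assignment(candidates, ref_set, cost):
    """Minimum-total-cost one-to-one matching of each reference point
    to a distinct candidate (brute force, for illustration only)."""
    best, best_total = None, float("inf")
    for perm in permutations(range(len(candidates)), len(ref_set)):
        total = sum(cost(candidates[c], ref_set[r])
                    for r, c in enumerate(perm))
        if total < best_total:
            best, best_total = perm, total
    return best, best_total  # best[r] = candidate index kept for ref point r

euclid = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
ref = [(0.0, 1.0), (1.0, 0.0)]
cands = [(0.9, 0.2), (0.1, 0.9), (0.5, 0.5)]
match, total = best_assignment(cands, ref, euclid)
# Each reference direction is represented at most once; the unmatched
# candidate (0.5, 0.5) would be discarded during truncation.
```

Minimizing the summed cost jointly, rather than greedily matching each reference point to its nearest candidate, prevents one well-converged candidate from claiming several reference directions at once, which is how the assignment enforces diversity alongside convergence.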

4. Computational Efficiencies

Classical evolutionary algorithms often rely on exhaustive pairwise dominance comparisons, which require $O(N^2)$ operations for a population of $N$ solutions. MaOEA/IGD simplifies this by exclusively comparing candidate solutions to the fixed set $P^*$ (with cardinality typically comparable to the population size), reducing computational overhead. Because $P^*$ remains fixed within each generation, distance computations can be precomputed and reused, further enhancing algorithmic efficiency. The three-way rank plus proximity distance assignment guarantees Pareto-compliant selection pressure even in high-dimensional and heavily non-dominated objective spaces (Sun et al., 2018).

5. Empirical Role and Algorithmic Impact

The evenly distributed reference set $P^*$ imparts IGD with the ability to simultaneously measure and enforce both convergence and diversity. Extensive testing on the DTLZ and WFG benchmark suites, involving problems with 8, 15, and 20 objectives, placed MaOEA/IGD in the top tier (80–90% of cases) relative to NSGA-III, MOEA/D, HypE, RVEA, and KnEA. Its performance—assessed primarily through IGD and Hypervolume—demonstrated statistical competitiveness and, in many instances, superiority, particularly in representing varied PF geometries (linear, concave, convex). The DNPE approach to nadir point estimation required significantly fewer function evaluations and exhibited robustness against PF shape and objective scaling compared to alternate estimators such as WC-NSGA-II and PCSEA (Sun et al., 2018).

6. Synthesis and Research Significance

The IGD paradigm, particularly as instantiated in MaOEA/IGD, advances many-objective function optimization by:

  • Estimating the nadir point efficiently through decomposition and constructing the reference set from it,
  • Enforcing even coverage via simplex-projected weights,
  • Integrating global selection mechanisms that directly minimize IGD generation by generation.

A plausible implication is that such integration of IGD at the selection level makes the algorithm robust across diverse problem characteristics, without requiring problem-specific reference front information—a persistent challenge for many-objective evolutionary computation (Sun et al., 2018).
