
Variable-Depth NSGA-II Algorithm

Updated 20 January 2026
  • The paper introduces a variable-depth NSGA-II algorithm that evolves quantum circuit ansätze with dynamic layer coding to balance noise robustness and hardware cost.
  • It formulates a multiobjective problem minimizing performance under noisy conditions and hardware resource use, with tailored selection, crossover, and mutation strategies.
  • A hybrid Hamiltonian ε-greedy parameter-sharing mechanism accelerates convergence and maintains gradient diversity for robust quantum circuit optimization.

The variable-depth NSGA-II algorithm is an enhanced evolutionary search method tailored to multiobjective quantum architecture search (QAS), where candidate parameterized quantum circuits (ansätze) are automatically synthesized under noisy hardware constraints. This approach extends the classical NSGA-II (Non-dominated Sorting Genetic Algorithm II) to operate efficiently over circuit populations of variable depth, optimizing simultaneously for noise-robust expressibility and quantum hardware cost. Integration of a hybrid Hamiltonian ε-greedy parameter-sharing mechanism further accelerates convergence and maintains gradient diversity. The methodology has been validated for quantum binary and multi-class classification under realistic noise, yielding resource-efficient, high-performing quantum architectures (Li et al., 16 Jan 2026).

1. Variable-Depth Encoding of Quantum Circuits

Each individual in the population represents a candidate quantum circuit encoded as an ordered vector of $l$ quantum "layers," where $l$ ranges within $[l_{\min},\,l_{\max}]$ and can mutate during evolution. Each layer is selected from the layer search space

$$\mathcal S_{\rm layer} = R_{\rm space} \times \mathrm{CNOT}_{\rm space},$$

with $R_{\rm space} = \{R_x, R_y, R_z\}^n$ for all combinations of single-qubit Pauli rotations on $n$ qubits, and $\mathrm{CNOT}_{\rm space}$ consisting of all $2^{n(n-1)/2}$ possible CNOT gate subsets. An individual is encoded as

$$A = (g_1,\,g_2,\,\dots,\,g_l), \quad g_i\in\{0, \dots, |\mathcal S_{\rm layer}|-1\}, \quad l\in[l_{\min},l_{\max}],$$

allowing for efficient insertions and deletions of circuit layers. This flexible integer-vector encoding facilitates the evolution of architectures with dynamically varying depths, directly reflecting quantum resource tradeoffs.
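This integer-vector encoding can be sketched in a few lines of Python. The qubit count, depth bounds, and function names below are illustrative assumptions, not details from the paper:

```python
import random

# Hypothetical example parameters: 3 qubits, depth bounds as in the text.
N_QUBITS = 3
L_MIN, L_MAX = 2, 6

# |S_layer| = |R_space| * |CNOT_space| = 3^n * 2^(n(n-1)/2)
LAYER_SPACE_SIZE = 3 ** N_QUBITS * 2 ** (N_QUBITS * (N_QUBITS - 1) // 2)

def random_individual(rng=random):
    """Sample a variable-depth individual A = (g_1, ..., g_l)."""
    depth = rng.randint(L_MIN, L_MAX)          # l drawn from [l_min, l_max]
    return [rng.randrange(LAYER_SPACE_SIZE) for _ in range(depth)]
```

Because each gene is just an index into $\mathcal S_{\rm layer}$, inserting or deleting a layer is a constant-time list operation on the vector.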

2. Multiobjective Formulation

The QAS problem is formalized as a two-objective minimization over the expressive power of the quantum ansatz under noise and the associated hardware cost. The fitness vector reads $F(A) = (\mathcal E(A),\, \mathcal C(A))$, where

  • Expressibility / task performance under noise:

$$\mathcal E(A) = \min_{\theta}\, \langle \psi(\theta,A)|\hat H_{\rm task}|\psi(\theta,A)\rangle$$

is the expectation value under a noise model, with task Hamiltonian $\hat H_{\rm task}$.

  • Hardware cost:

$$\mathcal C(A) = \alpha\,N_{\rm CNOT}(A) + \beta\,N_{\rm depth}(A)$$

aggregates the number of CNOT gates and circuit depth, weighted by tunable coefficients $\alpha,\,\beta$.

Both objectives are minimized, ensuring that the discovered circuits are both high-performing and resource-efficient in noisy quantum environments.
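The hardware-cost objective is a simple weighted sum; a minimal sketch, assuming example values for the tunable weights $\alpha$ and $\beta$ (the function name is hypothetical):

```python
def hardware_cost(n_cnot, depth, alpha=1.0, beta=0.5):
    """C(A) = alpha * N_CNOT(A) + beta * N_depth(A).

    alpha and beta are tunable weights trading off two-qubit gate
    count against circuit depth; the defaults here are illustrative.
    """
    return alpha * n_cnot + beta * depth
```

A circuit with 10 CNOTs and depth 4 would then score $1.0\cdot 10 + 0.5\cdot 4 = 12$ under these example weights.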

3. Algorithmic Framework

The enhanced variable-depth NSGA-II follows the broad structure of classical NSGA-II but introduces several key modifications for quantum circuit search and noise-awareness:

Function NA-QAS(N_pop, G_max, [l_min,l_max], θ_shared, H_task, 𝒩)
 1. Initialize P₀ by sampling l∈[l_min,l_max] and drawing g_j from 𝓢_layer
 2. Set global Pareto front PF ← ∅
 3. For t = 0 to G_max-1:
    a) Fine-tune circuit weights for each A ∈ P_t (hybrid Hamiltonian, noise-in-the-loop)
    b) Evaluate fitness F(A) = (𝓔, 𝓒)
    c) Non-dominated sort, assign crowding distances
    d) Generate offspring through:
        - Tournament selection
        - Simulated binary crossover (SBX) on integer vectors
        - Polynomial mutation
        - Quantum mutations: insert/delete/replace layer
    e) Form R_t = P_t ∪ Q_t, elitist selection, update PF
 4. Return PF

Fitness evaluations employ fine-tuning of shared parameters under noise, with the hybrid Hamiltonian strategy ensuring robust adaptation. Variation operators act natively on variable-length encodings, and elitist, crowding-aware selection preserves global diversity along the Pareto front.
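The non-dominated sort in step (c) is the standard NSGA-II fast sort, which is independent of individual length and therefore works unchanged for variable-depth populations. A self-contained Python sketch for minimization over fitness tuples (illustrative, not the paper's implementation):

```python
def non_dominated_sort(fitnesses):
    """Fast non-dominated sort for minimization.

    fitnesses: list of objective tuples, e.g. [(E, C), ...].
    Returns a list of fronts, each a list of individual indices.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(fitnesses)
    dominated_by = [[] for _ in range(n)]  # individuals that i dominates
    counts = [0] * n                       # how many individuals dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(fitnesses[i], fitnesses[j]):
                dominated_by[i].append(j)
            elif dominates(fitnesses[j], fitnesses[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)            # Pareto-optimal in the population
    f = 0
    while fronts[f]:
        nxt = []
        for i in fronts[f]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        f += 1
    return fronts[:-1]
```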

4. Genetic Operators for Variable-Length Individuals

Distinct genetic operators are defined to support variable-depth circuit evolution:

  • Initialization: Each individual is assigned a random depth $l\in[l_{\min}, l_{\max}]$, and each layer $g_j$ is uniformly sampled from $\mathcal S_{\rm layer}$.
  • Selection: Tournament selection is performed using standard NSGA-II rank and crowding distance metrics, fully compatible with variable-length individuals.
  • Crossover (SBX): Parents $A = (g_1, \dots, g_{l_A})$, $B = (h_1, \dots, h_{l_B})$ are aligned up to $\min(l_A, l_B)$; excess genes are stochastically copied or discarded, constraining offspring depth within bounds.
  • Mutation:
    • Polynomial mutation: Each gene undergoes perturbation as per the polynomial mutation distribution.
    • Depth-specific mutations:
      1. InsertLayer: Insert a fresh layer at a random position, $l \mapsto l+1$ (if $l < l_{\max}$).
      2. DeleteLayer: Randomly remove a layer, $l \mapsto l-1$ (if $l > l_{\min}$).
      3. ReplaceLayer: Replace an existing layer with a randomly sampled new one.

Variable-depth mutation induces exploration across architectures with distinct hardware footprints and functional expressibilities.
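The three depth-specific mutations above can be sketched as a single operator on the integer-vector encoding (function and parameter names are hypothetical):

```python
import random

def mutate_depth(individual, layer_space_size, l_min, l_max, rng=random):
    """Apply one of InsertLayer / DeleteLayer / ReplaceLayer at random.

    Depth bounds are enforced: insert is skipped at l_max and
    delete is skipped at l_min, leaving the individual unchanged.
    """
    ind = list(individual)
    op = rng.choice(["insert", "delete", "replace"])
    if op == "insert" and len(ind) < l_max:
        ind.insert(rng.randrange(len(ind) + 1), rng.randrange(layer_space_size))
    elif op == "delete" and len(ind) > l_min:
        del ind[rng.randrange(len(ind))]
    elif op == "replace":
        ind[rng.randrange(len(ind))] = rng.randrange(layer_space_size)
    return ind
```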

5. Hybrid Hamiltonian ε-Greedy Parameter-Sharing

Fine-tuning of circuit parameters leverages an ε-greedy strategy to mitigate the risk of parameter collapse (barren plateaus) and reduce redundant retraining. All individuals share a global parameter set $\theta_{\rm shared}$, together with small "supernet" classical heads $E_k(x) = W_k x + b_k$, $k=1,\dots,K$. During fine-tuning:

  1. For ansatz $A$, evaluate $\langle\psi(\theta, A)|\hat H_k|\psi(\theta,A)\rangle$ for all $k$, and denote the index of the lowest expectation value by $k^*$.
  2. With probability $1-\varepsilon$, update using the best Hamiltonian $\hat H_{k^*}$; with probability $\varepsilon$, update using a uniform mixture $\hat H_{\rm unif}$. In expectation, this corresponds to the mixed Hamiltonian

$$\hat H_{\rm mix} = (1-\varepsilon)\hat H_{k^*} + \varepsilon\, \hat H_{\rm unif}$$

  3. Parameters are updated by gradient descent on

$$\langle\psi(\theta,A)|\hat H_{\rm mix}|\psi(\theta,A)\rangle$$

  4. Only the selected supernet head $(W_{k^*}, b_{k^*})$ is updated.

This approach fosters exploration while concentrating exploitation on the most promising task Hamiltonian, leading to improved robustness and training efficiency under noise.
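Because expectation values are linear in the Hamiltonian, the loss under $\hat H_{\rm mix}$ is a convex combination of the per-Hamiltonian expectations: the best one, plus the uniform average. A minimal sketch of that bookkeeping (names are illustrative; the quantum expectation values are taken as given numbers):

```python
def mixed_loss(expectations, epsilon):
    """Loss under H_mix = (1 - eps) * H_{k*} + eps * H_unif.

    expectations: list of <psi|H_k|psi> values, one per candidate Hamiltonian.
    By linearity, <H_unif> is the mean of the per-Hamiltonian expectations
    and <H_{k*}> is their minimum.
    """
    best = min(expectations)
    unif = sum(expectations) / len(expectations)
    return (1 - epsilon) * best + epsilon * unif
```

At $\varepsilon = 0$ this reduces to pure exploitation of the best Hamiltonian; at $\varepsilon = 1$ it is the uniform mixture.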

6. Computational Complexity and Convergence

The dominant computational cost per generation arises from fine-tuning parameters for all $N_{\rm pop}$ individuals over $T$ gradient steps, each involving quantum circuit simulation under noise at cost $C_q$, for a total of $O(N_{\rm pop}\, T\, C_q)$. Non-dominated sorting scales as $O(N_{\rm pop}^2)$. Allowing variable depth expands the search cardinality from $|\mathcal S_{\rm layer}|^l$ (fixed $l$) to

$$\sum_{l=l_{\min}}^{l_{\max}} |\mathcal S_{\rm layer}|^l,$$

while the specialized genetic operators maintain tractability and population diversity. The ε-greedy hybrid Hamiltonian stabilizes parameter trajectories, improving the convergence of ansatz fine-tuning. NSGA-II elitism and crowding distance further guarantee monotonic improvement and spread of the Pareto front. Reported convergence is typically achieved in $G_{\max} = 50$–$200$ generations on standard variational quantum algorithm (VQA) benchmarks, with wall-clock time reduced by an order of magnitude relative to naïve retraining (Li et al., 16 Jan 2026).
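The variable-depth cardinality above is a finite geometric sum; a one-line sketch makes the growth explicit (the function name is illustrative):

```python
def search_space_size(layer_space_size, l_min, l_max):
    """Total number of variable-depth architectures: sum of |S_layer|^l
    over l in [l_min, l_max]."""
    return sum(layer_space_size ** l for l in range(l_min, l_max + 1))
```

For example, with $|\mathcal S_{\rm layer}| = 2$ and $l \in [1, 3]$ the space holds $2 + 4 + 8 = 14$ architectures, versus 8 for fixed $l = 3$.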
