Variable-Depth NSGA-II Algorithm
- The paper introduces a variable-depth NSGA-II algorithm that evolves quantum circuit ansätze with dynamic layer coding to balance noise robustness and hardware cost.
- It formulates a multiobjective problem that jointly minimizes a noise-aware task objective and hardware resource use, with selection, crossover, and mutation strategies tailored to variable-length circuit encodings.
- A hybrid Hamiltonian ε-greedy parameter-sharing mechanism accelerates convergence and maintains gradient diversity for robust quantum circuit optimization.
The variable-depth NSGA-II algorithm is an enhanced evolutionary search method tailored to multiobjective quantum architecture search (QAS), where candidate parameterized quantum circuits (ansätze) are automatically synthesized under noisy hardware constraints. This approach extends the classical NSGA-II (Non-dominated Sorting Genetic Algorithm II) to operate efficiently over circuit populations of variable depth, optimizing simultaneously for noise-robust expressibility and quantum hardware cost. Integration of a hybrid Hamiltonian ε-greedy parameter-sharing mechanism further accelerates convergence and maintains gradient diversity. The methodology has been validated for quantum binary and multi-class classification under realistic noise, yielding resource-efficient, high-performing quantum architectures (Li et al., 16 Jan 2026).
1. Variable-Depth Encoding of Quantum Circuits
Each individual in the population represents a candidate quantum circuit encoded as an ordered vector of $l$ quantum “layers,” where $l$ ranges within $[l_{\min}, l_{\max}]$ and can mutate during evolution. Each layer $g_j$ is selected from the layer search space

$$\mathcal{S}_{\text{layer}} = \mathcal{S}_{\text{rot}} \cup \mathcal{S}_{\text{ent}},$$

with $\mathcal{S}_{\text{rot}}$ containing all combinations of single-qubit Pauli rotations $\{R_x, R_y, R_z\}$ on the $n$ qubits, and $\mathcal{S}_{\text{ent}}$ consisting of all possible CNOT gate subsets. An individual is encoded as

$$A = (g_1, g_2, \ldots, g_l), \qquad g_j \in \mathcal{S}_{\text{layer}},$$

allowing for efficient insertions and deletions of circuit layers. This flexible integer-vector encoding facilitates the evolution of architectures with dynamically varying depths, directly reflecting quantum resource tradeoffs.
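The encoding above can be sketched in a few lines of plain Python. The layer-space size and the gene interpretation are illustrative assumptions, not values from the paper:

```python
import random

# Sketch of the variable-depth integer-vector encoding: each gene is an
# index into a finite layer search space S_layer (rotation-layer and
# CNOT-layer templates). The size 34 is an illustrative assumption.
LAYER_SPACE = list(range(34))

def random_individual(l_min, l_max, rng=random):
    """Sample an individual: a gene vector whose length (circuit depth)
    is itself a degree of freedom within [l_min, l_max]."""
    depth = rng.randint(l_min, l_max)
    return [rng.choice(LAYER_SPACE) for _ in range(depth)]
```

Because an individual is just a Python list of integers, layer insertion and deletion are O(depth) list operations, which is what makes the depth-changing mutations in Section 4 cheap.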
2. Multiobjective Formulation
The QAS problem is formalized as a two-objective minimization over the expressive power of the quantum ansatz under noise and the associated hardware cost. The fitness vector reads

$$F(A) = \big(\mathcal{E}(A),\, \mathcal{C}(A)\big),$$

where

- Expressibility / task performance under noise:

$$\mathcal{E}(A) = \operatorname{Tr}\!\left[ H_{\text{task}}\, \mathcal{N}\big(\rho_A(\theta)\big) \right]$$

is the expectation value under the noise model $\mathcal{N}$, with task Hamiltonian $H_{\text{task}}$.
- Hardware cost:

$$\mathcal{C}(A) = \alpha\, N_{\text{CNOT}}(A) + \beta\, \mathrm{depth}(A)$$

aggregates the number of CNOT gates and circuit depth, weighted by tunable coefficients $\alpha, \beta$.
Both objectives are minimized, ensuring that the discovered circuits are both high-performing and resource-efficient in noisy quantum environments.
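A minimal sketch of the two-objective fitness, assuming the noisy energy term is supplied as a callable stub standing in for the expectation of the task Hamiltonian under the noise model (the weight values and predicate name are illustrative, not from the paper):

```python
def hardware_cost(genes, is_entangling, alpha=1.0, beta=0.5):
    """C(A) = alpha * (#entangling layers, a CNOT-count proxy) + beta * depth.
    alpha and beta are the tunable cost weights."""
    n_cnot = sum(1 for g in genes if is_entangling(g))
    return alpha * n_cnot + beta * len(genes)

def fitness(genes, noisy_energy, is_entangling):
    """Two-objective vector (E, C); both components are minimized."""
    return (noisy_energy(genes), hardware_cost(genes, is_entangling))
```

Returning the objectives as a tuple keeps the evaluation decoupled from the selection machinery: the non-dominated sort only ever compares fitness tuples component-wise.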
3. Algorithmic Framework
The enhanced variable-depth NSGA-II follows the broad structure of classical NSGA-II but introduces several key modifications for quantum circuit search and noise-awareness:
Function NA-QAS(N_pop, G_max, [l_min,l_max], θ_shared, H_task, 𝒩)
1. Initialize P₀ by sampling l∈[l_min,l_max] and drawing g_j from 𝓢_layer
2. Set global Pareto front PF ← ∅
3. For t = 0 to G_max-1:
a) Fine-tune circuit weights for each A ∈ P_t (hybrid Hamiltonian, noise-in-the-loop)
b) Evaluate fitness F(A) = (𝓔, 𝓒)
c) Non-dominated sort, assign crowding distances
d) Generate offspring through:
- Tournament selection
- Simulated binary crossover (SBX) on integer vectors
- Polynomial mutation
- Quantum mutations: insert/delete/replace layer
e) Form R_t = P_t ∪ Q_t, elitist selection, update PF
4. Return PF
Fitness evaluations employ fine-tuning of shared parameters under noise, with the hybrid Hamiltonian strategy ensuring robust adaptation. Variation operators act natively on variable-length encodings, and elitist, crowding-aware selection preserves global diversity along the Pareto front.
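The elitist, crowding-aware selection in steps (c) and (e) uses standard NSGA-II machinery. A self-contained sketch (not the authors' code) of fast non-dominated sorting and crowding distance over two-objective fitness tuples:

```python
def dominates(f1, f2):
    """f1 dominates f2: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def non_dominated_sort(fits):
    """Return fronts as lists of indices into `fits` (front 0 = Pareto front)."""
    S = [[] for _ in fits]          # S[p]: solutions dominated by p
    n = [0] * len(fits)             # n[p]: how many solutions dominate p
    fronts = [[]]
    for p in range(len(fits)):
        for q in range(len(fits)):
            if p == q:
                continue
            if dominates(fits[p], fits[q]):
                S[p].append(q)
            elif dominates(fits[q], fits[p]):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]

def crowding_distance(front, fits):
    """Per-index crowding distance; boundary points get infinity."""
    d = {i: 0.0 for i in front}
    for k in range(len(fits[0])):
        order = sorted(front, key=lambda i: fits[i][k])
        d[order[0]] = d[order[-1]] = float("inf")
        span = (fits[order[-1]][k] - fits[order[0]][k]) or 1.0
        for a, b, c in zip(order, order[1:], order[2:]):
            d[b] += (fits[c][k] - fits[a][k]) / span
    return d
```

Because both routines operate only on fitness tuples and indices, they are indifferent to the individuals' lengths, which is what makes them compatible with the variable-depth encoding.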
4. Genetic Operators for Variable-Length Individuals
Distinct genetic operators are defined to support variable-depth circuit evolution:
- Initialization: Each individual is assigned a random depth $l \in [l_{\min}, l_{\max}]$, and each layer is uniformly sampled from $\mathcal{S}_{\text{layer}}$.
- Selection: Tournament selection is performed using standard NSGA-II rank and crowding distance metrics, fully compatible with variable-length individuals.
- Crossover (SBX): Parents $A_1$, $A_2$ are aligned up to $\min(l_1, l_2)$; excess genes are stochastically copied or discarded, constraining offspring depth within $[l_{\min}, l_{\max}]$.
- Mutation:
- Polynomial mutation: Each gene undergoes perturbation as per the polynomial mutation distribution.
- Depth-specific mutations:
- 1. InsertLayer: Insert a fresh layer at a random position (if $l < l_{\max}$).
- 2. DeleteLayer: Randomly remove a layer (if $l > l_{\min}$).
- 3. ReplaceLayer: Replace an existing layer with a randomly sampled new one.
Variable-depth mutation induces exploration across architectures with distinct hardware footprints and functional expressibilities.
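The three depth-specific mutations above amount to simple list edits on the integer-vector encoding; a minimal sketch (function names are illustrative):

```python
import random

def insert_layer(genes, layer_space, l_max, rng=random):
    """InsertLayer: add a fresh random layer at a random position if l < l_max."""
    if len(genes) < l_max:
        pos = rng.randint(0, len(genes))
        return genes[:pos] + [rng.choice(layer_space)] + genes[pos:]
    return list(genes)

def delete_layer(genes, l_min, rng=random):
    """DeleteLayer: remove a random layer if l > l_min."""
    if len(genes) > l_min:
        pos = rng.randrange(len(genes))
        return genes[:pos] + genes[pos + 1:]
    return list(genes)

def replace_layer(genes, layer_space, rng=random):
    """ReplaceLayer: overwrite a random position with a freshly sampled layer."""
    out = list(genes)
    out[rng.randrange(len(out))] = rng.choice(layer_space)
    return out
```

Each operator returns a new list rather than mutating in place, so parents survive unchanged into the elitist merge $R_t = P_t \cup Q_t$.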
5. Hybrid Hamiltonian ε-Greedy Parameter-Sharing
Fine-tuning of circuit parameters leverages an ε-greedy strategy to mitigate the risk of parameter collapse (barren plateaus) and reduce redundant retraining. All individuals share a global parameter set $\theta_{\text{shared}}$, together with small “supernet” classical heads $h_k$, one per candidate Hamiltonian $H_k$, $k = 1, \ldots, K$. During fine-tuning:
- For ansatz $A$, evaluate $\mathcal{E}_k(A) = \langle H_k \rangle_{\mathcal{N}}$ for all $k$; denote the best index $k^{*} = \arg\min_k \mathcal{E}_k(A)$.
- With probability $1 - \varepsilon$, update using the best Hamiltonian $H_{k^{*}}$; with probability $\varepsilon$, update using a uniform mixture $\bar{H} = \frac{1}{K} \sum_{k=1}^{K} H_k$.
- Parameters $\theta_{\text{shared}}$ are updated by gradient descent on the selected objective.
- Only the selected supernet head $h_{k^{*}}$ is updated.
This approach fosters exploration while concentrating exploitation on the most promising task Hamiltonian, leading to improved robustness and training efficiency under noise.
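The selection rule itself reduces to a few lines; a sketch under the assumption that `energies[k]` holds the noisy expectation of candidate Hamiltonian $H_k$ (lower is better, since the objectives are minimized; names are illustrative):

```python
import random

def choose_update_target(energies, eps, rng=random):
    """eps-greedy hybrid-Hamiltonian rule: return ('best', k*) with
    probability 1 - eps (exploit the best Hamiltonian H_{k*}), else
    ('mixture', k*) (take the gradient on the uniform average of all H_k).
    k* is also the index of the supernet head that gets updated."""
    k_star = min(range(len(energies)), key=energies.__getitem__)
    if rng.random() >= eps:
        return "best", k_star
    return "mixture", k_star
```

Returning `k_star` in both branches reflects that only the best-matching supernet head is updated even when the gradient is taken on the mixture.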
6. Computational Complexity and Convergence
The dominant computational cost per generation arises from fine-tuning parameters for all $N_{\text{pop}}$ individuals over $T$ gradient steps, each involving quantum circuit simulation under noise of cost $C_{\text{sim}}$, for a total of $O(N_{\text{pop}}\, T\, C_{\text{sim}})$. Non-dominated sorting scales as $O(M N_{\text{pop}}^2)$ with $M = 2$ objectives. Allowing variable depth expands the search cardinality from $|\mathcal{S}_{\text{layer}}|^{l}$ (fixed $l$) to

$$\sum_{l = l_{\min}}^{l_{\max}} |\mathcal{S}_{\text{layer}}|^{l},$$

while the specialized genetic operators maintain tractability and population diversity. The ε-greedy hybrid Hamiltonian strategy stabilizes parameter trajectories, improving the convergence of ansatz fine-tuning. NSGA-II elitism and crowding distance further guarantee monotonic improvement and spread of the Pareto front. Reported convergence is typically achieved within $200$ generations on standard variational quantum algorithm (VQA) benchmarks, with wall-clock time reduced by an order of magnitude relative to naïve retraining (Li et al., 16 Jan 2026).
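The cardinality expansion is a plain geometric sum over the allowed depths; a one-function sketch:

```python
def search_cardinality(s_layer_size, l_min, l_max):
    """Number of distinct variable-depth architectures:
    sum over l in [l_min, l_max] of |S_layer|^l."""
    return sum(s_layer_size ** l for l in range(l_min, l_max + 1))
```

For example, with $|\mathcal{S}_{\text{layer}}| = 10$ and depths 1 through 3, the variable-depth space holds 1110 architectures versus 1000 at fixed depth 3.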