Noise-Aware Quantum Architecture Search
- Noise-Aware Quantum Architecture Search (NA-QAS) is a methodology that incorporates realistic noise models and hardware constraints to optimize quantum circuit architectures.
- It employs advanced search strategies—including reinforcement learning, evolutionary multi-objective optimization, and sparse exploration—to jointly optimize circuit topology and parameters.
- Empirical benchmarks demonstrate that NA-QAS significantly reduces circuit depth and gate count while improving convergence speed and accuracy on quantum chemistry and QML tasks.
Noise-Aware Quantum Architecture Search (NA-QAS) refers to a class of algorithms and frameworks designed to automatically discover quantum circuit architectures that are optimized for both problem-specific expressivity and resilience to hardware-induced noise. NA-QAS methods explicitly incorporate device noise models, decoherence, and hardware constraints into the search process for parameterized quantum circuits (PQCs) used in variational quantum algorithms (VQAs) and quantum machine learning (QML). This is in contrast to traditional quantum architecture search (QAS), which often assumes a noiseless environment, and thus may return structures that are either intractable or suboptimal under realistic noisy intermediate-scale quantum (NISQ) hardware (Du et al., 2020, Patel et al., 2024, Kundu, 2024, Li et al., 16 Jan 2026, Chen et al., 2024, Ye et al., 2021).
1. Problem Formulation and Search Space
NA-QAS formalizes the simultaneous optimization of quantum circuit topology (architecture) and its continuous parameters under a noise model. The archetypal objective can be stated as

$$(\alpha^{*}, \theta^{*}) = \arg\min_{\alpha \in \mathcal{A}} \min_{\theta} \mathcal{L}(\alpha, \theta; \mathcal{N}_{\alpha}),$$

where $\alpha$ indexes a circuit structure (ansatz) from the architecture pool $\mathcal{A}$, $\theta$ represents the circuit's variational parameters, $\mathcal{L}$ is a loss functional that combines task accuracy (e.g., VQE energy, QML classification loss) with an explicit noise penalty, and $\mathcal{N}_{\alpha}$ encapsulates the noise map for architecture $\alpha$ (Du et al., 2020, Li et al., 16 Jan 2026).
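As a schematic illustration, the bi-level structure of this objective can be sketched with a toy surrogate loss (the function names, the quadratic loss form, and the depth-indexed architecture pool are illustrative assumptions, not any cited implementation):

```python
# Toy sketch of the NA-QAS bi-level objective: for each candidate architecture
# (here indexed only by its depth), inner-optimize the continuous parameters
# under a loss with a depth-dependent noise penalty, then pick the best one.
import numpy as np

rng = np.random.default_rng(0)

def noisy_loss(depth, theta, noise_rate=0.02):
    """Surrogate for L(alpha, theta; N_alpha): task loss + noise penalty
    that grows with depth (more gates -> more accumulated noise)."""
    task = np.sum((theta - 1.0) ** 2)   # stand-in for VQE energy / QML loss
    return task + noise_rate * depth

def inner_optimize(depth, steps=200, lr=0.1):
    theta = rng.normal(size=depth)
    for _ in range(steps):
        theta -= lr * 2 * (theta - 1.0)  # analytic gradient of the toy loss
    return noisy_loss(depth, theta)

# Outer (architecture) search over a small pool: depths 1..6.
best_depth = min(range(1, 7), key=inner_optimize)
print("selected depth:", best_depth)
```

Because the task loss is equally attainable at every depth, the noise penalty dominates and the search selects the shallowest architecture, mirroring the bias toward noise-resilient circuits.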
The circuit space may support:
- Structured layers with rotations and entanglement sublayers (Li et al., 16 Jan 2026)
- Variable circuit depth, controlled per instance (Li et al., 16 Jan 2026)
- Binary tensor encodings to represent gate placement and connectivity (Patel et al., 2024, Kundu, 2024)
- Sparse or dynamic architectures incorporating gate growth and pruning (Chen et al., 2024)
The bi-objective variant additionally introduces a hardware-expressivity cost; the search then seeks Pareto fronts between target performance (e.g., ground state energy, classification accuracy) and hardware overhead (Li et al., 16 Jan 2026).
2. Noise Modeling and Simulation
NA-QAS integrates explicit, gate-local noise descriptions, operating at the level of:
- Depolarizing channels, $\mathcal{E}_p(\rho) = (1-p)\,\rho + p\,\frac{I}{2^n}$, where $p$ is the gate infidelity and $n$ is the number of qubits (Du et al., 2020, Patel et al., 2024, Ye et al., 2021)
- Bit-flip channels, $\mathcal{E}_p(\rho) = (1-p)\,\rho + p\,X\rho X$
- Phase-damping and amplitude damping (T1, T2 relaxation) (Li et al., 16 Jan 2026, Kundu, 2024)
- Measurement/readout errors and crosstalk, using IBMQ or real-device calibration data (Du et al., 2020, Patel et al., 2024, Kundu, 2024, Chen et al., 2024)
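As a concrete reference point, the global depolarizing model above can be applied to a density matrix in a few lines (a minimal NumPy sketch, not code from any cited framework):

```python
# Apply the n-qubit depolarizing channel E_p(rho) = (1-p) rho + p I/2^n.
import numpy as np

def depolarize(rho, p):
    n = int(np.log2(rho.shape[0]))          # number of qubits
    return (1 - p) * rho + p * np.eye(2 ** n) / 2 ** n

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|, a pure single-qubit state
noisy = depolarize(plus, 0.1)

trace = np.trace(noisy).real                 # trace is preserved (= 1)
purity = np.trace(noisy @ noisy).real        # purity drops below 1 under noise
print(trace, purity)
```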
Efficient simulation is achieved via Pauli-transfer matrices (PTMs) in the Liouville basis, fusing gate unitaries and noise superoperators to accelerate state propagation during search (up to 6× over Kraus-based simulators) (Patel et al., 2024, Kundu, 2024). For on-chip or "real noise" scenarios, experiments operate directly on hardware, leveraging device-provided error models (Chen et al., 2024).
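The PTM idea can be illustrated on a single qubit: a channel becomes a 4×4 real matrix acting on the vector of Pauli expectation values, and a noisy gate is just the product of the gate's PTM with the noise PTM. The matrices below are standard textbook forms, not code from the cited simulators:

```python
# Single-qubit PTMs act on (1, x, y, z); fusing gate and noise superoperators
# reduces noisy state propagation to plain matrix multiplication.
import numpy as np

p = 0.1
depol_ptm = np.diag([1.0, 1 - p, 1 - p, 1 - p])   # depolarizing channel PTM

# PTM of RZ(pi/2): rotates the (x, y) Bloch components about z, fixes 1 and z.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
rz_ptm = np.array([[1, 0,  0, 0],
                   [0, c, -s, 0],
                   [0, s,  c, 0],
                   [0, 0,  0, 1.0]])

noisy_gate = depol_ptm @ rz_ptm         # fused noisy-gate superoperator
bloch = np.array([1.0, 1.0, 0.0, 0.0])  # |+> state: x = 1
out = noisy_gate @ bloch                # y component shrunk to 1 - p
print(out)
```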
3. Search Algorithms and Optimization Strategies
NA-QAS instantiates several algorithmic paradigms:
a) Reinforcement Learning (RL)–Assisted QAS:
- State: Tensor encoding of partial circuit plus noise or cost summaries (Patel et al., 2024, Kundu, 2024)
- Action: Discrete gate placements (including choice of qubit, rotation axis, or CNOT connectivity)
- Reward: Composite of the task loss and a noise-penalizing term (Patel et al., 2024, Kundu, 2024)
- RL Policy: Double Deep Q-Network (DDQN), curriculum learning (moving-threshold), and random-halting (negative-binomial episode truncation) are used to promote rapid, noise-efficient circuit discovery (Patel et al., 2024, Kundu, 2024). Continual reinforcement learning (Probabilistic Policy Reuse with DQN) has been proposed for rapidly adapting to changing noise conditions (Ye et al., 2021).
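A minimal illustration of such a composite reward (the weights and exact form here are assumptions, not the cited papers' reward functions):

```python
# Composite RL reward: improvement over the running-best energy, minus a
# penalty on two-qubit gates, which dominate noise on NISQ devices.
def composite_reward(energy, best_energy, n_cnots, lam=0.01):
    improvement = best_energy - energy   # > 0 when the new circuit is better
    return improvement - lam * n_cnots   # discourage CNOT-heavy circuits

r = composite_reward(energy=-1.10, best_energy=-1.05, n_cnots=4)
print(r)
```

With this shaping, an architecture only earns positive reward when its energy gain outweighs the noise cost of the gates it spends.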
b) Evolutionary/Multi-Objective Search:
- Enhanced NSGA-II with variable depth and Pareto sorting on target performance and hardware cost (Li et al., 16 Jan 2026)
- Hybrid Hamiltonian ε-greedy parameter sharing across "supernets" to amortize parameter learning and break local optima (Li et al., 16 Jan 2026)
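The core Pareto-sorting step can be sketched as a toy non-dominated filter over (error, hardware-cost) pairs; the enhanced NSGA-II of the paper additionally uses crowding distance, variable depth, and parameter sharing:

```python
# Non-dominated filter: keep candidates for which no other candidate is at
# least as good on both objectives (both minimized) and distinct.
def pareto_front(points):
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# (performance error, hardware cost) for five hypothetical circuits
candidates = [(0.10, 40), (0.05, 60), (0.20, 20), (0.05, 80), (0.12, 45)]
front = pareto_front(candidates)
print(front)
```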
c) In-Time Sparse Exploration ("QuantumSEA"):
- Interleaved gate pruning (via salience scores) and gate growth (via historical gradient averages and randomization)
- Joint topology and parameter optimization under explicit noise constraints and hardware execution budgets (Chen et al., 2024)
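The prune/grow step can be sketched as follows; salience here is |parameter| and the growth score an accumulated-gradient magnitude, both common proxies and assumptions on our part rather than QuantumSEA's exact criteria:

```python
# One interleaved prune/grow step under a fixed gate (sparsity) budget.
import numpy as np

def prune_and_grow(params, mask, grad_avg, k):
    """Deactivate the k weakest active gates, then activate the k inactive
    slots with the largest historical-gradient magnitude."""
    active = np.flatnonzero(mask)
    weakest = active[np.argsort(np.abs(params[active]))[:k]]
    mask[weakest] = False
    inactive = np.flatnonzero(~mask)
    strongest = inactive[np.argsort(-np.abs(grad_avg[inactive]))[:k]]
    mask[strongest] = True
    params[strongest] = 0.0              # regrown gates restart at identity
    return params, mask

rng = np.random.default_rng(1)
params = rng.normal(size=8)
mask = np.array([True] * 4 + [False] * 4)   # 4 of 8 candidate gates active
params, mask = prune_and_grow(params, mask, rng.normal(size=8), k=1)
print(int(mask.sum()))                       # budget preserved: still 4 active
```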
d) Joint Structure-Parameter "Supernet" Optimization:
- Weight-sharing among a sampled batch of circuit "supernets," combined with adversarial allocation based on noisy loss (Du et al., 2020, Li et al., 16 Jan 2026). Avoids quadratic cost scaling of separately-trained ansätze.
e) SPSA and Adam-based Parameter Training:
- Multi-stage, shot-robust SPSA variants for parameter optimization within the architecture search loop, leveraging Adam moment updates and staged measurement budgets (Patel et al., 2024).
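A bare-bones SPSA step of the kind used inside the loop looks as follows (the cited variant adds Adam moment updates and staged shot budgets, which this sketch omits):

```python
# SPSA: estimate the gradient from only two loss evaluations with a random
# simultaneous perturbation, regardless of the parameter dimension.
import numpy as np

rng = np.random.default_rng(2)

def spsa_step(f, theta, a=0.1, c=0.1):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
    g_hat = (f(theta + c * delta) - f(theta - c * delta)) / (2 * c) * delta
    return theta - a * g_hat

loss = lambda t: np.sum(t ** 2)        # toy stand-in for a measured cost
theta = np.array([1.0, -2.0])
for _ in range(200):
    theta = spsa_step(loss, theta)
print(loss(theta))                     # driven close to 0
```

The two-evaluation gradient estimate is what makes SPSA attractive under shot-limited quantum hardware, where each loss evaluation costs measurement budget.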
4. Curriculum and Halting Mechanisms
To both favor shorter, less noise-prone circuits and guide the agent's search, NA-QAS frameworks implement:
- Moving-threshold curricula, where the cost/energy threshold is adaptively updated based on running-best, lower-bound proxies, and soft amortization windows (Patel et al., 2024, Kundu, 2024)
- Randomized episode halting, using negative-binomial sampling of episode lengths, which statistically biases search toward low-depth circuits while allowing multicircuit exploration (Patel et al., 2024, Kundu, 2024)
These strategies quantitatively reduce circuit depth and gate counts, promote rapid convergence to noise-resilient solutions, and prevent overfitting to unattainable objectives.
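The halting mechanism can be illustrated directly from the distribution (the parameters `n` and `p` below are illustrative assumptions):

```python
# Negative-binomial episode lengths: most episodes halt early (favoring
# shallow circuits), but the tail still permits occasional deeper exploration.
import numpy as np

rng = np.random.default_rng(3)

# numpy counts failures before the n-th success; +1 enforces >= 1 gate.
lengths = rng.negative_binomial(n=2, p=0.2, size=10_000) + 1
print(lengths.mean())   # theoretical mean: n * (1 - p) / p + 1 = 9
```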
5. Empirical Results and Benchmarking
NA-QAS methods have been benchmarked on both quantum chemistry (VQE) and QML tasks, under simulated and real hardware noise. Key performance highlights include:
- "Curriculum reinforcement learning QAS" (CRLQAS) achieves chemical accuracy for VQE (energy error below $1.6\times 10^{-3}$ Ha) with significantly shallower circuits and fewer gates than RLQAS, qubit-ADAPT-VQE, and quantumDARTS (Patel et al., 2024)
- On IBMQ hardware, NA-QAS circuits consistently reach lower energy errors or higher test accuracies using 40–60% fewer CNOTs and 2–3× shorter depths than hardware-efficient ansätze and fixed-structure baselines (Patel et al., 2024, Du et al., 2020, Kundu, 2024, Li et al., 16 Jan 2026)
- On classification and multi-class Iris tasks, NA-QAS under bit-flip/depolarizing/thermal noise outperforms random/evolutionary search, matching or exceeding the best baselines with fewer two-qubit gates and less depth (Li et al., 16 Jan 2026)
- "QuantumSEA" halves quantum gate usage and execution time versus dense or previously known noise-adaptive methods, achieving 1–6% accuracy gains in QML and lower VQE energy estimation errors (Chen et al., 2024)
- NA-QAS’s continual RL variants demonstrate substantial speed-ups (2–4× faster convergence) and higher stability in adapting to new, more complex noise patterns over standard DQN (Ye et al., 2021)
6. Implementation Aspects and Practical Considerations
NA-QAS implementations leverage:
- Advanced quantum circuit and automatic differentiation frameworks (Qiskit, PennyLane, JAX with XLA for GPU-accelerated tensor operations) (Patel et al., 2024, Du et al., 2020, Kundu, 2024)
- Realistic noise profile imports from daily IBMQ calibrations
- Parameter-shift rule gradient estimation for non-differentiable hardware or classical noise models (Chen et al., 2024)
- Classical optimization via Adam, quantum natural gradient, or hybrid strategies across supernet ensembles (Du et al., 2020, Li et al., 16 Jan 2026)
Enforcement of hardware constraints (coherence windows, gate budgets) is handled by explicit sparsity, execution-time capping, and post-compilation checks (Chen et al., 2024). All key noise mitigation and circuit adaptation strategies have demonstrated practical transfer to real quantum devices.
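For reference, the parameter-shift rule used for gradient estimation (exact for gates generated by a Pauli operator) can be verified in a few lines; the target $f(\theta) = \cos\theta$ corresponds to the Z expectation after RY($\theta$) on $|0\rangle$:

```python
# Parameter-shift rule for Pauli-generated gates:
#   d<f>/d(theta) = [f(theta + pi/2) - f(theta - pi/2)] / 2   (exact, not a
#   finite difference -- both shifted values are measurable on hardware)
import numpy as np

def param_shift_grad(f, theta):
    return (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2

theta = 0.7
g = param_shift_grad(np.cos, theta)    # exact derivative of cos: -sin(theta)
print(g)
```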
7. Theoretical Guarantees and Regret Analysis
While the search problem is inherently combinatorial, some NA-QAS frameworks establish theoretical guarantees. For example, the supernet-based weight-sharing method achieves zero regret with respect to the best of its W supernets per round, outperforming any single adversarial bandit strategy by a factor reflecting the degree of parallelization, albeit under the caveat of heuristic architecture sampling (Du et al., 2020). Pareto optimality is guaranteed in multi-objective evolutionary search by the properties of the NSGA-II update (Li et al., 16 Jan 2026).
In summary, Noise-Aware Quantum Architecture Search (NA-QAS) constitutes a sophisticated suite of algorithms for automated PQC architecture discovery under realistic noise. Integrating tensorized circuit encodings, deep reinforcement learning, multi-objective evolutionary strategies, explicit noise modeling, and fast simulation paradigms, NA-QAS delivers both high-fidelity and resource-efficient circuits tailored to NISQ limitations. Empirical benchmarks confirm substantial accuracy, depth, and speed improvements over previous noise-agnostic and fixed-structure approaches (Du et al., 2020, Patel et al., 2024, Kundu, 2024, Li et al., 16 Jan 2026, Chen et al., 2024, Ye et al., 2021).