Analog Quantum Error Mitigation
- Analog Quantum Error Mitigation is a set of techniques that suppress or compensate for continuous errors in Hamiltonian-driven systems using methods like zero-noise extrapolation and analog maximum-likelihood decoding.
- It employs protocols such as replication-based mitigation, dual-state purification, and probabilistic error cancellation to improve simulation fidelity on NISQ devices like superconducting circuits and trapped ions.
- Protocols are perturbative and scalable within NISQ limits, demonstrating practical gains like a 90% reduction in VQE energy error and extended qubit coherence times.
Analog quantum error mitigation (AQEM) comprises a suite of techniques developed to suppress or compensate for errors in quantum devices that implement computations or simulations via continuous, Hamiltonian-driven dynamics, rather than gate-based, digital circuits. Unlike fault-tolerant quantum error correction, which requires large qubit overhead and real-time feedback, AQEM operates within the constraints of noisy intermediate-scale quantum (NISQ) platforms—superconducting circuits, trapped ions, Rydberg arrays, and quantum annealers—where both coherent and incoherent errors limit simulability and computational accuracy. AQEM leverages the continuous and analog nature of errors, employing strategies such as probabilistic cancellation, zero-noise extrapolation, Hamiltonian reshaping/rescaling, and analog maximum-likelihood decoding to recover estimates of the ideal, noise-free dynamics without additional physical redundancy.
1. Continuous-Time Error Models and Analog Outcomes
AQEM addresses error processes that emerge during continuous-time unitary evolution or open-system dynamics, characterized by time-dependent master equations involving both engineered and error Hamiltonians, possibly combined with Lindblad dissipators. Central to many AQEM protocols is the identification and exploitation of the analog outcome—real-valued measurement results or continuously parameterizable error strengths—rather than restricting corrections to discrete syndrome bits.
A representative case is the GKP (Gottesman-Kitaev-Preskill) encoding, where qubits are encoded in oscillator modes subject to small continuous displacements (Gaussian noise). Measurement outcomes yield real values $q$, from which both digital (the bit value $k \bmod 2$ of the nearest comb peak $k\sqrt{\pi}$) and analog (the deviation $\Delta = q - k\sqrt{\pi}$) information are extracted. Conventional digital error correction discards $\Delta$, but analog schemes utilize it for maximum-likelihood decoding, correcting error patterns (such as double errors in repetition codes) strictly inaccessible to digital majority-vote decoders (Fukui et al., 2017).
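As a concrete illustration, below is a minimal sketch of this digital/analog split for a single square-lattice GKP mode; the peak-spacing convention and noise scale are illustrative choices, not parameters from Fukui et al. (2017):

```python
import numpy as np

SQRT_PI = np.sqrt(np.pi)

def gkp_extract(q):
    """Split a homodyne outcome q into a digital bit and an analog deviation.

    Peaks of logical 0 (1) sit at even (odd) multiples of sqrt(pi); the
    deviation is the signed distance to the nearest peak.
    """
    k = int(np.rint(q / SQRT_PI))     # index of the nearest comb peak
    bit = k % 2                       # digital bit / syndrome information
    delta = q - k * SQRT_PI           # analog deviation in [-sqrt(pi)/2, sqrt(pi)/2]
    return bit, delta

# Example: a logical-0 peak at q = 0, shifted by Gaussian displacement noise
rng = np.random.default_rng(0)
q = rng.normal(0.0, 0.25)
print(gkp_extract(q))
```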
For more general analog simulators, the noise is modeled as random, static or slowly varying parameter perturbations to the system Hamiltonian. Typical forms include
$$\tilde{H} = H_0 + \sum_j \delta_j V_j,$$
with the $\delta_j$ drawn independently from distributions (e.g., Gaussian or thermal), representing shot-to-shot or device-specific fluctuations (Cai et al., 2023, Steckmann et al., 19 Jun 2025).
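A minimal sketch of this noise model for a transverse-field Ising chain with independent Gaussian fluctuations on the couplings; the Hamiltonian, noise scale, and function names are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_ops, n):
    """Tensor product placing single-site operators at given sites of an n-qubit chain."""
    mats = [site_ops.get(i, I2) for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def noisy_ising(n, J=1.0, h=0.5, sigma=0.05, rng=None):
    """One noise realization: H = sum_j (J + delta_j) Z_j Z_{j+1} + h sum_j X_j,
    with static per-shot disorder delta_j ~ N(0, sigma)."""
    deltas = rng.normal(0.0, sigma, size=n - 1)
    H = sum((J + d) * op_on({j: Z, j + 1: Z}, n) for j, d in enumerate(deltas))
    H += sum(h * op_on({j: X}, n) for j in range(n))
    return H

rng = np.random.default_rng(1)
H_shot = noisy_ising(4, rng=rng)   # a single shot-to-shot realization
```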
2. Extrapolation-Based Mitigation: Zero-Noise and Parameter Amplification
Zero-noise extrapolation (ZNE) is a cornerstone of AQEM. The method exploits the ability to controllably amplify noise—by slowing down evolution, increasing relaxation rates, varying temperature, or amplifying Hamiltonian fluctuations—and measures the observable of interest at several noise strengths. The results are fit to a polynomial or exponential ansatz, and the zero-noise estimate is extracted as the extrapolated intercept (Ma et al., 9 Apr 2025, Amin et al., 2023, Steckmann et al., 19 Jun 2025, García-Molina et al., 2021).
A prototypical ZNE protocol comprises:
- Identification of a hardware parameter $\lambda$ controlling error strength (e.g., decoherence rate, amplitude noise, or temperature).
- Execution of the analog evolution at several values $\lambda_i$, collecting corresponding measurement outcomes $\langle O \rangle(\lambda_i)$.
- Polynomial (or suitable functional) fit to $\langle O \rangle(\lambda)$, with the zero-noise limit taken as the extrapolated intercept $\langle O \rangle(0)$ (a minimal numerical sketch follows this list).
- Rigorous error bounds via perturbative expansions ensure that for sufficiently small noise and proper choice of functional form, bias can be systematically reduced (Steckmann et al., 19 Jun 2025, Sun et al., 2020).
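The sketch below implements the protocol end to end for a toy observable whose decay under an amplifiable dephasing rate is modeled in closed form; the noise model, ansatz degree, and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def measure(scale, base_rate=0.05, omega=1.0, t=2.0, shots=4000):
    """Noisy estimate of <Z(t)>: the ideal value cos(omega*t), damped by
    exp(-scale * base_rate * t), plus binomial shot noise."""
    mean = np.cos(omega * t) * np.exp(-scale * base_rate * t)
    p_up = (1 + mean) / 2
    return 2 * rng.binomial(shots, p_up) / shots - 1

scales = np.array([1.0, 1.5, 2.0, 3.0])       # noise amplification factors
values = np.array([measure(c) for c in scales])

coeffs = np.polyfit(scales, values, deg=2)     # quadratic ansatz in the scale
zne_estimate = np.polyval(coeffs, 0.0)         # extrapolated zero-noise intercept
print(zne_estimate, "vs ideal", np.cos(1.0 * 2.0))
```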
Empirically, ZNE has enabled, for example, a 90% reduction in VQE energy error for a 4-spin Ising model on superconducting hardware (Ma et al., 9 Apr 2025), extension of two-qubit oscillation lifetimes by a factor of three in trapped ions (Steckmann et al., 19 Jun 2025), and recovery of critical Kibble–Zurek scaling in D-Wave quantum annealers well beyond the raw coherence time (Amin et al., 2023).
3. Structural and Algorithmic Error Mitigation: Replication and Dual-State Purification
Certain hardware and computational models admit specialized analog error mitigation by structurally manipulating the embedding or the computation protocol.
Replication-Based Mitigation (RBM): In quantum annealing platforms, systematic hardware biases can be mitigated by embedding multiple independent 'replicas' of the same logical problem into disjoint hardware regions. By post-selecting the minimum-energy outcome across replicas, the upward energy bias (from random static offsets) is reduced, with the suppression improving with the number of replicas $R$ and set by the noise scale $\sigma$ per replica (Djidjev, 2024).
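A minimal sketch of replica post-selection, under an illustrative model in which each replica's returned energy carries an independent, upward-biased random offset (this noise model is an assumption for the sketch, not the one analyzed by Djidjev, 2024):

```python
import numpy as np

rng = np.random.default_rng(3)

def annealer_energy(e_true, sigma=0.1):
    """One replica's returned energy: the true energy plus a nonnegative
    offset, modeling random static biases that push energies upward."""
    return e_true + np.abs(rng.normal(0.0, sigma))

e_true = -10.0
R = 8                                          # number of disjoint replicas
replica_energies = [annealer_energy(e_true) for _ in range(R)]
e_mitigated = min(replica_energies)            # post-select the minimum energy
print("mitigated:", e_mitigated, " single replica:", replica_energies[0])
```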
Dual-State Purification: For Markovian decoherence in quantum annealing, a protocol consisting of a forward noisy evolution, an intermediate projective measurement, and a backward inverse anneal reconstructs a virtually purified state. Expectation values of observables are then estimated by classical post-processing of overlaps between final and dual states, suppressing decoherence-induced leakage without requiring ancillary qubits, at the cost of doubling the circuit depth (Shingu et al., 2022).
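The purification effect itself can be illustrated with simple density-matrix arithmetic: decoherence admixes a mixed component, and a quadratic estimator of the form Tr(Oρ²)/Tr(ρ²) suppresses it. The actual protocol of Shingu et al. (2022) estimates such overlaps from forward/backward annealing runs rather than from ρ directly; this toy sketch only shows why purification helps:

```python
import numpy as np

Z = np.diag([1.0, -1.0])

# Ideal pure state (angle is an arbitrary illustrative choice)
psi = np.array([np.cos(0.3), np.sin(0.3)])
rho_ideal = np.outer(psi, psi)

# Decohered state: ideal state admixed with the maximally mixed state
p = 0.3
rho = (1 - p) * rho_ideal + p * np.eye(2) / 2

raw = np.trace(Z @ rho).real
purified = (np.trace(Z @ rho @ rho) / np.trace(rho @ rho)).real
ideal = (psi @ Z @ psi).real
print(f"ideal {ideal:.4f}  raw {raw:.4f}  purified {purified:.4f}")
```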
These strategies are resource efficient and compatible with NISQ devices, providing significant error suppression in practice.
4. Analog Error Mitigation Protocols: Hamiltonian Reshaping and Rescaling
Two advanced AQEM approaches—Hamiltonian reshaping and Hamiltonian rescaling—exploit the spectrum-preserving properties of unitary conjugation and controllable amplitude scaling:
- Hamiltonian Reshaping: Multiple analog simulations are performed with the target Hamiltonian conjugated by randomly selected unitaries (e.g., elements of the Pauli group). Averaging over these instances cancels first-order noise shifts in the energy spectrum, reducing mean relative errors from $O(\epsilon)$ to $O(\epsilon^2)$ for noise strength $\epsilon$, due to self-averaging over the transformation ensemble (Guo et al., 2024).
- Hamiltonian Rescaling: Simulations are run with Hamiltonian scaled by different amplitudes and corresponding rescaled evolution time. Through first- or second-order Richardson extrapolation, both first- and second-order noise-induced errors can be canceled. Combining reshaping and rescaling compounds their mitigation effect (Guo et al., 2024).
Both methods target tasks such as eigenenergy estimation and are validated numerically on many-body lattice models. Extension to physical devices requires precise control over Hamiltonian synthesis and the ability to implement a subset of conjugating unitaries.
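A minimal numerical sketch of the rescaling idea, under the assumption that a fixed coherent error term V does not scale with the programmed amplitude c: running at amplitude c (with evolution time rescaled accordingly) suppresses the relative error as 1/c, and a polynomial (Richardson-style) extrapolation in 1/c removes the leading orders. The matrices, amplitudes, and fit degree are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_herm(n, scale, rng):
    """Random Hermitian matrix of the given scale (toy stand-in for H0 or V)."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return scale * (A + A.conj().T) / 2

H0 = rand_herm(8, 1.0, rng)
V = rand_herm(8, 0.02, rng)        # static error, independent of the drive amplitude

def measured_ground_energy(c):
    """Ground energy from a run at amplitude c: the device implements
    c*H0 + V, and the extracted eigenvalue is rescaled by 1/c."""
    return np.linalg.eigvalsh(c * H0 + V)[0] / c

cs = np.array([1.0, 1.5, 2.0, 3.0])
xs = 1.0 / cs                       # relative error weight scales as 1/c
es = np.array([measured_ground_energy(c) for c in cs])

coeffs = np.polyfit(xs, es, deg=2)  # Richardson-style polynomial extrapolation
e_mitigated = np.polyval(coeffs, 0.0)
print(e_mitigated, "vs exact", np.linalg.eigvalsh(H0)[0])
```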
5. Probabilistic Error Cancellation and Stochastic Techniques
Stochastic error mitigation exploits the statistical independence of random fluctuations across circuit components or hardware sites. In analog quantum simulation, local static perturbations that are zero mean and uncorrelated lead, by central limit arguments, to a suppression of expectation value errors: the typical error scales as $O(\sqrt{N}\,\epsilon\,T)$ (where $N$ is the system size, $\epsilon$ the local noise strength, and $T$ the evolution time) versus the $O(N \epsilon T)$ scaling for adversarial noise (Cai et al., 2023). This stochastic cancellation enables a higher noise tolerance, extending reliable simulation time and system size.
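A purely statistical toy comparison of the two scalings (no dynamics simulated): the aggregate first-order shift from N independent zero-mean local errors grows on the order of sqrt(N)*eps, while N aligned, adversarial errors grow as N*eps:

```python
import numpy as np

rng = np.random.default_rng(5)
eps, trials = 0.01, 2000

for N in (10, 100, 1000):
    # total first-order shift from N independent zero-mean local perturbations
    random_shift = np.abs(rng.normal(0.0, eps, size=(trials, N)).sum(axis=1)).mean()
    adversarial_shift = N * eps            # all local errors perfectly aligned
    print(f"N={N:5d}  random ~ {random_shift:.3f}"
          f" (sqrt(N)*eps = {np.sqrt(N) * eps:.3f}),"
          f"  adversarial = {adversarial_shift:.3f}")
```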
Analog quasi-probability error mitigation protocols extend digital gate-based error cancellation to the continuous-time setting. By stochastically applying single-qubit channels during evolution, one can realize a recovery super-operator that cancels the effect of known Markovian noise in expectation, with overhead exponential in the total noise–time product (Sun et al., 2020). Residual bias can be further removed by integrating with ZNE.
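A minimal single-step sketch of quasi-probability cancellation for single-qubit depolarizing noise: the inverse channel is expanded over Pauli conjugations with quasi-probability weights, sampled with signs, and reweighted by the overhead gamma. The channel and its strength are illustrative; the continuous-time protocol of Sun et al. (2020) applies this idea to noise accumulated between stochastic insertions:

```python
import numpy as np

rng = np.random.default_rng(6)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

p = 0.1                                # depolarizing probability per step
f = 1 - 4 * p / 3                      # Pauli-transfer eigenvalue of the channel

def depolarize(rho):
    return (1 - p) * rho + (p / 3) * sum(P @ rho @ P for P in PAULIS)

# Quasi-probability expansion of the inverse channel over {I, X, Y, Z}
# conjugations: coefficients satisfy a + 3b = 1 and a - b = 1/f.
b = (1 - 1 / f) / 4                    # negative weight on each Pauli term
a = 1 - 3 * b                          # weight on the identity term
gamma = abs(a) + 3 * abs(b)            # sampling overhead

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_in = np.outer(plus, plus.conj())   # ideal <X> = 1

coeffs = [a, b, b, b]
ops = [I2] + PAULIS
samples = []
for _ in range(20000):
    rho = depolarize(rho_in)
    k = rng.choice(4, p=[abs(c) / gamma for c in coeffs])
    rho = ops[k] @ rho @ ops[k]        # sampled recovery operation
    samples.append(gamma * np.sign(coeffs[k]) * np.trace(X @ rho).real)

print(np.mean(samples), "ideal = 1, unmitigated =", f)
```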
6. Analog Quantum Error Correction with Hybrid Maximum-Likelihood Decoding
A distinct, information-theoretic approach involves hybrid use of analog and digital error information in bosonic encodings such as the GKP code. Given the analog deviation $\Delta$ of each comb-peak measurement, error correction is reformulated as a joint maximum-likelihood (ML) decoding: for a codeword block (e.g., the three-qubit bit-flip code or concatenated C4/C6 code), all possible error patterns are scored by the product of their associated Gaussian likelihoods, and the most probable pattern is selected (a toy decoder is sketched after the list below).
Performance enhancements are significant:
- Double errors can be corrected in a three-qubit code when analog ML is used, while digital decoding can only correct single errors.
- In concatenated coding, analog ML decoding on GKP qubits achieves the hashing bound for the Gaussian quantum channel (maximum possible for any code), superior to digital-only schemes (Fukui et al., 2017).
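A toy Monte Carlo comparison of digital majority-vote decoding against analog ML decoding for the three-qubit bit-flip code on GKP-like outcomes, following the likelihood scoring described above; the noise model, conventions, and parameters are simplified illustrative assumptions:

```python
import numpy as np

SQRT_PI = np.sqrt(np.pi)
rng = np.random.default_rng(7)
sigma, trials = 0.45, 20000

def decode_trial():
    shifts = rng.normal(0.0, sigma, size=3)   # continuous shifts on logical 0
    n = np.rint(shifts / SQRT_PI)             # nearest-peak indices
    bits = (n % 2).astype(int)                # digital bit values
    deltas = shifts - n * SQRT_PI             # analog deviations

    majority = int(bits.sum() >= 2)           # digital majority vote

    # Analog ML: score both logical hypotheses by the product of Gaussian
    # likelihoods. A flipped bit implies the shift crossed the decision
    # boundary, landing at distance sqrt(pi) - |delta| from its true peak.
    def loglik(L):
        flips = bits != L
        dist = np.where(flips, SQRT_PI - np.abs(deltas), np.abs(deltas))
        return -np.sum(dist**2) / (2 * sigma**2)

    ml = int(loglik(1) > loglik(0))
    return majority, ml                       # nonzero value = logical failure

results = np.array([decode_trial() for _ in range(trials)])
print("majority-vote failure:", results[:, 0].mean(),
      " analog-ML failure:", results[:, 1].mean())
```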
7. Scalability, Limitations, and Prospects
AQEM methods are inherently scalable within NISQ limitations:
- Sampling overhead is polynomial or subexponential in system size for stochastic or extrapolation-based protocols, provided per-qubit error rates and control are sufficiently low (Steckmann et al., 19 Jun 2025, Sun et al., 2020).
- Protocols such as RBM, ZNE, and analog-block symmetrization can be implemented on hardware lacking fast feedback or syndrome extraction, requiring only single-qubit control, global parameter tuning, or readily programmable Hamiltonians (Djidjev, 2024, Garcia-de-Andoin et al., 6 May 2025, Ma et al., 9 Apr 2025, García-Molina et al., 2021).
However, AQEM is constrained by several factors:
- All extrapolation protocols are perturbative, assuming errors are small and analytic in their control parameters.
- ZNE and reshaping/rescaling rely on the ability to perform repeated, independent runs at variable error amplitudes, and such amplitude control may be coarse-grained or unavailable in practice.
- Sampling noise, instability of high-order polynomial fits, and non-Markovianity can all degrade mitigation efficacy (Steckmann et al., 19 Jun 2025).
Ongoing research aims to integrate AQEM with small-scale error-correcting codes, multi-modal extrapolations, and hybrid digital-analog protocols to further suppress errors and to extend their reach in larger-scale, more complex analog quantum platforms.
Key References
- Analog quantum error correction with encoding a qubit into an oscillator (Fukui et al., 2017)
- Stochastic Error Cancellation in Analog Quantum Simulation (Cai et al., 2023)
- Experimental Implementation of a Qubit-Efficient Variational Quantum Eigensolver with Analog Error Mitigation on a Superconducting Quantum Processor (Ma et al., 9 Apr 2025)
- Quantum annealing with error mitigation (Shingu et al., 2022)
- Replication-based quantum annealing error mitigation (Djidjev, 2024)
- Mitigating noise in digital and digital-analog quantum computation (García-Molina et al., 2021)
- Impact and mitigation of Hamiltonian characterization errors in digital-analog quantum computation (Garcia-de-Andoin et al., 6 May 2025)
- Mitigating Errors in Analog Quantum Simulation by Hamiltonian Reshaping or Hamiltonian Rescaling (Guo et al., 2024)
- Quantum error mitigation in quantum annealing (Amin et al., 2023)
- Mitigating realistic noise in practical noisy intermediate-scale quantum devices (Sun et al., 2020)
- Error mitigation of shot-to-shot fluctuations in analog quantum simulators (Steckmann et al., 19 Jun 2025)