Quantum Erasure Correction: Methods and Impact
- Quantum erasure correction is a set of protocols and techniques that protect quantum information from heralded erasure errors caused by photon loss, leakage, or qubit loss.
- It employs both discrete-variable and continuous-variable codes alongside specialized decoding methods that convert syndrome decoding into tractable linear algebra problems.
- Hardware-native approaches, such as superconducting dual-rail and cavity-QED implementations, leverage erasure conversion to achieve higher thresholds and lower logical error rates.
Quantum erasure correction is the set of protocols, code constructions, and decoding strategies designed specifically to protect quantum information against flagged erasure errors—faults for which the location (but not necessarily the operator type) of the error is known. Erasure errors arise naturally from photon loss, leakage out of the defined Hilbert space, or catastrophic loss of physical qubits, and are fundamentally easier to correct than unflagged (Pauli) errors. Quantum erasure correction spans discrete-variable and continuous-variable encodings, incorporates hardware-native erasure-conversion primitives, and permits threshold gains, overhead reductions, and decoding efficiencies unattainable in generic quantum error correction schemes. Modern quantum architectures are increasingly designed to induce erasure dominance and to leverage erasure-aware codes for both near-term and scalable fault tolerance.
1. Erasure Channels and Physical Error Models
Quantum erasure errors differ sharply from depolarizing or dephasing errors in that the loss event is both probabilistic and perfectly heralded. A canonical single-qubit erasure channel acts as

$$\mathcal{E}_p(\rho) = (1-p)\,\rho + p\,|e\rangle\langle e|,$$

where $|e\rangle$ is the erasure flag state, orthogonal to the computational subspace (Violaris et al., 5 Jan 2026). Typical physical mechanisms include photon loss in optical fibers or cavities, amplitude-damping in superconducting or atomic qubits, and leakage to higher-lying levels in multilevel systems. Erasure conversion protocols translate generic leakage or loss channels to erasure channels via mid-circuit detection—such as fluorescence measurements in trapped ions, shelving in neutral atoms, or ancilla-aided syndrome extraction in superconducting qubits—with efficiencies up to 98% (Pecorari et al., 27 Feb 2025, Zhang et al., 16 Jun 2025, Koottandavida et al., 2023).
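As a concrete illustration, a minimal NumPy sketch of this channel is shown below; the three-level density-matrix representation and the value of `p_erase` are illustrative choices, not taken from any of the cited experiments.

```python
import numpy as np

def erasure_channel(rho_qubit: np.ndarray, p_erase: float) -> np.ndarray:
    """Apply the single-qubit erasure channel.

    With probability (1 - p_erase) the qubit state is untouched; with
    probability p_erase it is replaced by the flag state |e><e|, which is
    orthogonal to the computational subspace. The output lives in the
    three-level space spanned by {|0>, |1>, |e>}.
    """
    rho_out = np.zeros((3, 3), dtype=complex)
    rho_out[:2, :2] = (1.0 - p_erase) * rho_qubit   # surviving qubit component
    rho_out[2, 2] = p_erase                         # heralded erasure flag
    return rho_out

# Example: a |+> state sent through a 10% erasure channel.
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
rho = erasure_channel(plus, p_erase=0.10)
print(f"erasure heralded with probability {rho[2, 2].real:.2f}")  # -> 0.10
```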
The distinction between erasures and Pauli errors is fundamental: a distance-$d$ code corrects up to $d-1$ erasures rather than $\lfloor (d-1)/2 \rfloor$ Pauli errors, and knowledge of the erased locations transforms syndrome decoding from NP-hard combinatorial matching into a tractable linear-algebra problem (Kuo et al., 2024).
2. Code Constructions for Erasure Correction
Erasure-correcting quantum codes exist for both stabilizer (qubit) and non-stabilizer (qudit, CV) systems. Notable families include:
- Surface codes and Floquet codes: Topological codes attain erasure thresholds near the erasure-channel capacity at vanishing rate, saturating the $p_e = 1/2$ capacity bound (Gu et al., 2023, Kuo et al., 2024).
- Quantum LDPC codes: High-rate families (Clifford-deformed La-cross, Bicycle, Lifted-product) exhibit competitive erasure thresholds at constant encoding rates, including rates of $1/4$ and $3/4$ (Kuo et al., 2024, Pecorari et al., 27 Feb 2025).
- Quantum polynomial codes: QPyC over sufficiently large qudit dimension recover from erasures affecting just under half of the blocks and approach the $50\%$ erasure threshold at large code size (Muralidharan et al., 2015); a classical-analogue interpolation sketch appears after this list.
- Continuous-variable codes: CV erasure codes use Gaussian or non-Gaussian entanglement distributed over multiple optical modes; e.g., Lassen et al. demonstrated a four-mode code that restores quantum coherence after photon loss, operating both deterministically and probabilistically (Lassen et al., 2010). Similarly, three-mode CV codes employing squeezed Bell resources maintain high transmission fidelity even as erasure probabilities exceed $0.5$ (Villasenor et al., 2022).
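To make the polynomial-code erasure mechanism concrete, here is a classical-analogue sketch (Reed-Solomon-style evaluation over a small prime field; the field size, code parameters, and data are illustrative, not drawn from Muralidharan et al.): a degree-$(K-1)$ polynomial evaluated at $N$ points is determined by any $K$ surviving evaluations, so up to $N-K$ heralded erasures can be undone by interpolation.

```python
# Classical analogue of polynomial-code erasure recovery: Lagrange
# interpolation over a prime field GF(P). Any K surviving evaluations of a
# degree-(K-1) polynomial determine it, so up to N - K heralded erasures
# are tolerated. All parameters here are illustrative.
P = 7            # prime field size (needs P >= N distinct evaluation points)
N, K = 5, 2      # N codeword symbols carry K message symbols

def eval_poly(coeffs, x):
    """Evaluate a polynomial with the given coefficients at x, mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def interpolate(points, x_target):
    """Evaluate the unique degree-(len(points)-1) polynomial through
    `points` at x_target, via Lagrange interpolation mod P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x_target - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 by Fermat
    return total

message = [3, 5]                                   # message polynomial 3 + 5x
codeword = [eval_poly(message, x) for x in range(1, N + 1)]
erased = {3, 5}                                    # heralded erased positions (x values)
survivors = [(x, y) for x, y in zip(range(1, N + 1), codeword) if x not in erased][:K]
recovered = {x: interpolate(survivors, x) for x in erased}
assert all(recovered[x] == eval_poly(message, x) for x in erased)
print("recovered erased symbols:", recovered)      # matches the lost codeword symbols
```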
Hardware-native code designs, notably dual-rail qubits in superconducting or cavity-QED devices, are explicitly engineered so that amplitude-damping or photon loss acts as erasure, enabling direct integration into stabilizer or LDPC architectures (Levine et al., 2023, Koottandavida et al., 2023, Violaris et al., 5 Jan 2026).
3. Decoding Protocols and Measurement Reduction
Erasure correction exploits the knowledge of erased positions to allow maximum-likelihood decoding via linear algebra or belief propagation. For stabilizer codes, the coset-based decoding problem,

$$H\big|_{\mathcal{E}}\, e = s \pmod 2,$$

where $H$ is the check matrix restricted to the erased set $\mathcal{E}$, $e$ is an error supported on $\mathcal{E}$, and $s$ is the measured syndrome, admits all feasible solutions as maximum-likelihood decoders, with degeneracy handled by logical coset tracking (Kuo et al., 2024). For topological and LDPC codes, linear-time BP decoders exploiting error degeneracy achieve capacity or near-capacity thresholds, with logical error rates limited only by the decoder's false-convergence rate (Kuo et al., 2024, Pecorari et al., 27 Feb 2025).
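A minimal sketch of this reduction, using the three-qubit bit-flip repetition code as a stand-in for a general stabilizer code (the code, the erased set, and the helper names are illustrative): restricting the parity-check matrix to the erased columns and finding any error consistent with the syndrome yields a valid correction. The brute-force search below would be replaced by Gaussian elimination over GF(2) for large codes.

```python
import numpy as np
from itertools import product

# Z-type parity checks of the 3-qubit bit-flip repetition code: Z1Z2, Z2Z3.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)

def decode_erasure(H, erased, syndrome):
    """Find an X-error supported on the erased qubits that reproduces the
    measured syndrome, by searching over GF(2) assignments on the erased
    columns (Gaussian elimination replaces this loop for large codes)."""
    H_e = H[:, erased]                              # checks restricted to erased qubits
    for bits in product([0, 1], repeat=len(erased)):
        e = np.array(bits, dtype=np.uint8)
        if np.array_equal(H_e @ e % 2, syndrome):
            full = np.zeros(H.shape[1], dtype=np.uint8)
            full[erased] = e
            return full                             # one representative of the ML coset
    return None

# Qubit 1 is heralded as erased and (unknown to the decoder) suffered an X error.
error = np.array([0, 1, 0], dtype=np.uint8)
syndrome = H @ error % 2                            # measured stabilizer outcomes
correction = decode_erasure(H, erased=[1], syndrome=syndrome)
print("correction:", correction)                    # -> [0 1 0], cancels the error
```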
Measurement-minimizing constructions leverage quantum local recovery: correcting erasures in a stabilizer code requires measuring only those stabilizers whose support intersects the erased positions. For generalized surface codes, only the vertex and face operators adjacent to the erased region must be measured, and the count can be reduced further by basis selection (Matsumoto, 27 Oct 2025). For large codes, this cut in measurement and qudit involvement amounts to orders of magnitude.
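The selection rule itself is simple to state in code. The toy sketch below (stabilizers represented only by their qubit supports, with made-up generator labels, not the generalized-surface-code construction of Matsumoto) filters a generator list down to the checks that must be measured for a given erased set.

```python
# Toy illustration of quantum local recovery: keep only the stabilizer
# generators whose support intersects the erased qubit set.
stabilizers = {
    "Z0Z1": {0, 1},
    "Z1Z2": {1, 2},
    "Z2Z3": {2, 3},
    "X0X1X2X3": {0, 1, 2, 3},
}

def checks_to_measure(stabilizers, erased):
    """Return the generators whose support overlaps the erased positions;
    all other stabilizer measurements can be skipped for erasure recovery."""
    return {name for name, support in stabilizers.items()
            if support & set(erased)}

print(checks_to_measure(stabilizers, erased=[3]))
# -> {'Z2Z3', 'X0X1X2X3'}: two of the four checks suffice for this erasure.
```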
In continuous-variable implementations, joint homodyne measurements across auxiliary modes and feedforward displacement suffice for deterministic erasure recovery, while postselection can enhance the correction fidelity, albeit at reduced success rates (Lassen et al., 2010, Villasenor et al., 2022).
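For intuition about the feedforward step, the following sketch gives a simplified mean-field picture that ignores quantum noise and squeezing: a random orthogonal matrix stands in for the encoding beamsplitter network, one output mode is erased at a known position, and the input amplitude is re-estimated from the surviving modes by a least-squares inversion, which plays the role of the gain-optimized feedforward displacement. The network, mode count, and amplitude are illustrative assumptions, not parameters of the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4-mode linear-optics network: a random orthogonal matrix
# spreads the signal mode over four output modes (ancilla inputs taken as
# vacuum, i.e. zero mean amplitude).
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))

signal = 0.8                                  # input coherent amplitude (mean field)
inputs = np.array([signal, 0.0, 0.0, 0.0])
outputs = U @ inputs                          # amplitudes leaving the encoder

erased_mode = 2                               # heralded loss of one output mode
survivors = [m for m in range(4) if m != erased_mode]

# Feedforward-style recovery: invert the known transfer column restricted
# to the surviving modes (least squares = gain-optimized linear estimate).
col = U[survivors, 0].reshape(-1, 1)
estimate, *_ = np.linalg.lstsq(col, outputs[survivors], rcond=None)
print(f"recovered amplitude: {estimate[0]:.3f}")   # -> 0.800
```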
4. Thresholds, Logical Error Suppression, and Performance
Erasure codes achieve substantially elevated thresholds compared to their Pauli-error counterparts:
- Surface code: a pure-erasure threshold at the $50\%$ code-capacity limit (far above the corresponding Pauli-error threshold), and up to $0.23$ for full circuit-noise models with erasure bias (Violaris et al., 5 Jan 2026, Pecorari et al., 27 Feb 2025).
- LDPC codes: La-cross codes reach high erasure thresholds, outperform standard surface codes at comparable code parameters, and achieve order-of-magnitude reductions in subthreshold logical error rates (Pecorari et al., 27 Feb 2025).
- Topological codes: Toric and XZZX codes empirically saturate the erasure capacity threshold at zero rate (Kuo et al., 2024).
The scaling law for the logical error rate is

$$p_L \sim A\left(\frac{p_e}{p_e^{\mathrm{th}}}\right)^{d} + B\left(\frac{p_p}{p_p^{\mathrm{th}}}\right)^{\lceil d/2\rceil}$$

for erasure rate $p_e$ and residual Pauli rate $p_p$, with exponential suppression in the full distance $d$ (rather than $d/2$) for pure erasure noise (Violaris et al., 5 Jan 2026, Kang et al., 2022).
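The two suppression exponents can be compared numerically with a short script; the prefactors and threshold values used below are placeholders chosen only to expose the scaling, not figures from the cited papers.

```python
import math

def logical_error_rate(p_erase, p_pauli, d,
                       p_erase_th=0.5, p_pauli_th=0.01, A=0.1, B=0.1):
    """Heuristic scaling-law estimate: erasures are suppressed with
    exponent d, residual Pauli errors with exponent ceil(d/2).
    All constants are illustrative placeholders."""
    return (A * (p_erase / p_erase_th) ** d
            + B * (p_pauli / p_pauli_th) ** math.ceil(d / 2))

for d in (3, 5, 7, 9):
    pure = logical_error_rate(p_erase=0.05, p_pauli=0.0, d=d)
    mixed = logical_error_rate(p_erase=0.05, p_pauli=0.002, d=d)
    print(f"d={d}: pure-erasure p_L ~ {pure:.2e}, with residual Pauli p_L ~ {mixed:.2e}")
```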
Trade-offs between cost and performance are explicit in hybrid-erasure architectures: strategic placement of erasure qubits in central rows/columns of a surface code patch can boost effective code distance and threshold by up to 50%, at only 1.5–2× hardware cost compared to full erasure deployment (Chadwick et al., 30 Apr 2025).
5. Hardware Demonstrators and Implementation Strategies
Recent advances enable direct realization of erasure codes:
- Superconducting dual-rail qubits: Resonant transmon pairs with active erasure checks and reset achieve millisecond-scale erasure lifetimes, high erasure-detection fidelity, and large erasure-to-Pauli error-bias ratios; mid-circuit checks contribute only modest additional dephasing (Levine et al., 2023, Koottandavida et al., 2023).
- Cavity-QED: Double-post 3D cavity encoding yields erasure (photon-loss) and residual-dephasing timescales in the millisecond range, enabling robust quantum error correction with improved scaling (Koottandavida et al., 2023).
- Neutral atoms/metastable ions: Shelving in long-lived manifolds (Yb, Ca) and fluorescence detection convert spontaneous losses into erasure events detected with >99% fidelity, enabling implementation of [[4,2,2]] codes, logical teleportation, and fault-tolerant protocols with elevated thresholds (Zhang et al., 16 Jun 2025, Kang et al., 2022).
- Spin qubit architectures: Singlet-triplet encoding enables hardware-efficient leakage detection, raising the XZZX surface-code threshold and yielding several orders-of-magnitude reduction in logical error rate without measurement feedback loops (Siegel et al., 15 Jan 2026).
Dynamical error-reshaping techniques further suppress erasure-check errors and gate infidelity via pulse shaping and dynamically corrected gates (DCGs), pushing the residual error rates to very low levels (Dakis et al., 9 Oct 2025).
6. Distributed Quantum Storage and Entanglement Cost
Distributed quantum erasure correction involves minimizing resource costs for restoring lost nodes in quantum networks. For MDS codes, the entanglement cost in star networks is tightly bound at $2t$ qudits (replacement-hub) and $2t-1$ (helper-hub), with optimal protocols given by download-and-return circuits (Senthoor, 26 May 2025). Extensions to tree networks, regenerative codes, and approximate protocols are formulated via LOCC monotonicity and Schmidt-rank analyses, but trade-off curves with more than minimal helpers remain open.
7. Continuous-Variable Erasure Correction
CV erasure-correcting codes combine linear optics, Gaussian or non-Gaussian entanglement, and multi-mode redundancy. The four-mode code (Lassen et al., 2010) and three-mode code (Villasenor et al., 2022) experimentally demonstrate restoration of coherence and transmission fidelity well above the relevant classical bound, with postselective schemes exceeding the fidelity of deterministic operation for representative squeezing parameters. Feedforward displacement based on multi-mode homodyne measurements and optimized gain enables near-perfect recovery of erased signals under experimentally realistic photon-loss probabilities.
8. Future Directions and Open Problems
Key directions include:
- Optimization of erasure-check measurement protocol cadence and reset schemes for further threshold enhancement (Gu et al., 2024).
- Design of erasure-specific decoders (BP+OSD, AMBP4) for very large QLDPC codes and concatenated architectures (Kuo et al., 2024, Pecorari et al., 27 Feb 2025).
- Hardware-aware scheduling under parallelism and device constraints.
- Characterization of entanglement cost trade-offs in distributed erasure correction beyond star topologies (Senthoor, 26 May 2025).
- Extension of code-deformation k-shift protocols for managing dense, highly-erased patches in neutral atom arrays (Kobayashi et al., 2024).
- Integration of permutation-invariant inner codes for blockwise loss models (Kuo et al., 2024).
Quantum erasure correction thus represents a convergent direction for scalable quantum fault-tolerance, combining hardware-driven error bias, efficient code design, optimized decoding, and reduced resource overhead with explicit codes and protocols that saturate channel capacities and outperform traditional approaches in a range of platforms.