
Quantum Erasure Correction: Methods and Impact

Updated 16 January 2026
  • Quantum erasure correction is a set of protocols and techniques that protect quantum information from heralded erasure errors caused by photon loss, leakage, or qubit loss.
  • It employs both discrete-variable and continuous-variable codes alongside specialized decoding methods that convert syndrome decoding into tractable linear algebra problems.
  • Hardware-native approaches, such as superconducting dual-rail and cavity-QED implementations, leverage erasure conversion to achieve higher thresholds and lower logical error rates.

Quantum erasure correction is the set of protocols, code constructions, and decoding strategies designed specifically to protect quantum information against flagged erasure errors: faults for which the location (but not necessarily the operator type) of the error is known. Erasure errors arise naturally from photon loss, leakage out of the computational Hilbert space, or catastrophic loss of physical qubits, and are fundamentally easier to correct than unflagged (Pauli) errors. Quantum erasure correction spans discrete-variable and continuous-variable encodings, incorporates hardware-native erasure-conversion primitives, and permits threshold gains, overhead reductions, and decoding efficiencies unattainable in generic quantum error correction schemes. Modern quantum architectures are increasingly designed to induce erasure dominance and to leverage erasure-aware codes for both near-term and scalable fault tolerance.

1. Erasure Channels and Physical Error Models

Quantum erasure errors differ sharply from depolarizing or dephasing errors in that the loss event, while probabilistic, is perfectly heralded. A canonical single-qubit erasure channel acts as

\mathcal{E}_p(\rho) = (1 - p)\,\rho + p\,|e\rangle\langle e|,

where $|e\rangle$ is the erasure flag state, orthogonal to the computational subspace (Violaris et al., 5 Jan 2026). Typical physical mechanisms include photon loss in optical fibers or cavities, amplitude damping in superconducting or atomic qubits, and leakage to higher-lying levels in multilevel systems. Erasure conversion protocols translate generic leakage or loss channels into erasure channels via mid-circuit detection, such as fluorescence measurements in trapped ions, shelving in neutral atoms, or ancilla-aided syndrome extraction in superconducting qubits, with efficiencies up to 98% (Pecorari et al., 27 Feb 2025, Zhang et al., 16 Jun 2025, Koottandavida et al., 2023).
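As a concrete illustration, the following minimal Python sketch applies the erasure channel above to a qubit density matrix by embedding it in a three-level space whose third basis state plays the role of the flag state $|e\rangle$. The basis ordering and function name are illustrative choices, not drawn from the cited papers.

```python
import numpy as np

def erasure_channel(rho, p):
    """Single-qubit erasure channel, with the qubit embedded in a 3-level space.

    Basis order: |0>, |1>, |e>, where |e> is the orthogonal erasure flag state.
    """
    flag = np.zeros((3, 3), dtype=complex)
    flag[2, 2] = 1.0                      # |e><e|
    rho3 = np.zeros((3, 3), dtype=complex)
    rho3[:2, :2] = rho                    # embed the qubit state
    return (1 - p) * rho3 + p * flag

rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|
out = erasure_channel(rho_plus, 0.1)
print(out.round(3))
# Heralding: measuring the projector |e><e| reveals whether erasure occurred,
# without disturbing the surviving (unerased) branch of the state.
```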

The distinction between erasures and Pauli errors is fundamental: a distance-$d$ code corrects up to $d-1$ erasures rather than $\lfloor (d-1)/2 \rfloor$ general Pauli errors, and knowledge of the erased locations transforms syndrome decoding from NP-hard combinatorial matching into a tractable linear-algebra problem (Kuo et al., 2024).
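To make the distance claim concrete, here is a self-contained sketch (all function names are illustrative) that tests whether a stabilizer code can correct a given erasure set: the set is correctable exactly when no nontrivial logical operator is supported entirely on the erased qubits. For the $[[4,2,2]]$ code it confirms that any single erasure is correctable ($d-1 = 1$) even though no single Pauli error is ($\lfloor(d-1)/2\rfloor = 0$).

```python
import numpy as np
from itertools import product

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = (M % 2).astype(np.uint8).copy()
    r = 0
    for c in range(M.shape[1]):
        hits = np.nonzero(M[r:, c])[0]
        if hits.size == 0:
            continue
        M[[r, r + hits[0]]] = M[[r + hits[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
        if r == M.shape[0]:
            break
    return r

def erasure_correctable(stabs, n, erased):
    """True iff no nontrivial logical operator lives entirely on `erased`.

    `stabs`: k x 2n binary matrix of stabilizer generators in symplectic
    (x|z) form. Brute-forces all 4^{|erased|} Paulis on the erased set,
    which is fine for small examples.
    """
    lam = np.zeros((2 * n, 2 * n), dtype=int)   # symplectic form matrix
    lam[:n, n:] = np.eye(n, dtype=int)
    lam[n:, :n] = np.eye(n, dtype=int)
    base_rank = gf2_rank(stabs)
    gens = []                                   # X_q and Z_q for each erased q
    for q in erased:
        for off in (0, n):
            v = np.zeros(2 * n, dtype=np.uint8)
            v[q + off] = 1
            gens.append(v)
    for coeffs in product((0, 1), repeat=len(gens)):
        E = np.zeros(2 * n, dtype=np.uint8)
        for c, g in zip(coeffs, gens):
            if c:
                E ^= g
        if not E.any():
            continue
        commutes = not np.any((stabs.astype(int) @ lam @ E) % 2)
        in_group = gf2_rank(np.vstack([stabs, E])) == base_rank
        if commutes and not in_group:
            return False    # undetectable logical confined to the erasures
    return True

# [[4,2,2]] code: stabilizer generators XXXX and ZZZZ
stabs = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
print(erasure_correctable(stabs, 4, [0]))     # True: one erasure (d-1 = 1)
print(erasure_correctable(stabs, 4, [0, 1]))  # False: two erasures exceed d-1
```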

2. Code Constructions for Erasure Correction

Erasure-correcting quantum codes exist for both stabilizer (qubit) and non-stabilizer (qudit, CV) systems. Notable families include:

  • Surface codes and Floquet codes: Topological codes attain erasure thresholds near the erasure-channel capacity $p^* = 0.5$ at vanishing rate, saturating $C(p) = 1 - 2p$ (Gu et al., 2023, Kuo et al., 2024); a numerical sanity check of quoted (rate, threshold) pairs against this capacity follows the list.
  • Quantum LDPC codes: High-rate families (Clifford-deformed La-cross, Bicycle, Lifted-product) exhibit thresholds of $p^* \approx 0.4$ (rate $\approx 0.04$), $p^* \approx 0.285$ (rate $1/4$), and $p^* \approx 0.092$ (rate $3/4$) (Kuo et al., 2024, Pecorari et al., 27 Feb 2025).
  • Quantum polynomial codes: QPyC $(2k+1, 1, k+1)$ codes over qudit dimension $d \geq 2k+1$ recover from up to $k$ erasures and reach the $50\%$ threshold at large code size (Muralidharan et al., 2015).
  • Continuous-variable codes: CV erasure codes use Gaussian or non-Gaussian entanglement distributed over multiple optical modes; e.g., Lassen et al. demonstrated a four-mode code capable of restoring quantum coherence after photon loss with fidelity $F \simeq 0.57$ deterministically and $F \simeq 0.82$ probabilistically (Lassen et al., 2010). Similarly, three-mode CV codes employing squeezed Bell resources can achieve $>0.5$ fidelity even as erasure probabilities exceed $0.5$ (Villasenor et al., 2022).
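The quick check below (plain Python; the numbers are the ones quoted above) verifies that each quoted (rate, threshold) pair respects the erasure-channel capacity $C(p) = 1 - 2p$, and that the zero-rate surface-code threshold sits exactly at the point where the capacity vanishes.

```python
# Sanity check: quoted (rate, threshold) pairs against C(p) = 1 - 2p.
def erasure_capacity(p):
    """Quantum capacity of the erasure channel with erasure probability p."""
    return max(0.0, 1.0 - 2.0 * p)

families = {
    "surface/Floquet (zero rate)": (0.0, 0.5),
    "La-cross-type QLDPC":         (0.04, 0.4),
    "rate-1/4 QLDPC":              (0.25, 0.285),
    "rate-3/4 QLDPC":              (0.75, 0.092),
}

for name, (rate, p_star) in families.items():
    cap = erasure_capacity(p_star)
    print(f"{name:28s} rate={rate:.3f}  C(p*)={cap:.3f}  feasible={rate <= cap}")
```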

Hardware-native code designs, notably dual-rail qubits in superconducting or cavity-QED devices, are explicitly engineered so that amplitude damping or photon loss acts as a heralded erasure, enabling direct integration into stabilizer or LDPC architectures (Levine et al., 2023, Koottandavida et al., 2023, Violaris et al., 5 Jan 2026).
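A minimal Monte Carlo sketch of the erasure-conversion idea follows; the detection efficiency value and the function name are illustrative assumptions, not figures from the cited experiments. Photon loss on a dual-rail qubit empties both rails, so an end-of-cycle occupation check heralds most losses, leaving only a small unheralded residue that behaves like ordinary unflagged noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def convert_losses(p_loss, eta=0.98, shots=200_000):
    """Split loss events into heralded erasures and unheralded residue.

    p_loss: probability the dual-rail photon is lost in one cycle.
    eta:    probability the erasure check catches a loss (assumed value).
    """
    lost = rng.random(shots) < p_loss
    caught = lost & (rng.random(shots) < eta)
    return caught.mean(), (lost & ~caught).mean()

heralded, residual = convert_losses(p_loss=0.01)
print(f"heralded erasure rate ~ {heralded:.4f}, unheralded residue ~ {residual:.5f}")
# With eta = 0.98, roughly 98% of losses become flagged erasures; the small
# remainder is what an erasure-aware code must still treat as Pauli noise.
```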

3. Decoding Protocols and Measurement Reduction

Erasure correction exploits the knowledge of erased positions to allow maximum-likelihood decoding via linear algebra or belief propagation. For stabilizer codes, the coset-based decoding problem,

H E^{T} = s^{T}, \qquad E_j = 0 \ \text{for } j \notin r,

admits all feasible solutions as maximum-likelihood decoders, with degeneracy handled by logical coset tracking (Kuo et al., 2024). For topological and LDPC codes, linear-time BP decoders exploiting error degeneracy achieve capacity or near-capacity thresholds, with logical error rates matching false-convergence rates down to $10^{-6}$ (Kuo et al., 2024, Pecorari et al., 27 Feb 2025).
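A compact sketch of this linear-algebra reduction over GF(2) is shown below (function names are illustrative; for a CSS code one would run it separately on the X-type and Z-type check matrices). It restricts the parity-check matrix to the erased columns and solves the resulting system by Gaussian elimination; any feasible solution serves as a maximum-likelihood coset representative.

```python
import numpy as np

def decode_erasure(H, syndrome, erased):
    """Solve H E^T = s^T over GF(2) with E supported only on `erased`."""
    A = np.concatenate(
        [H[:, erased] % 2, syndrome.reshape(-1, 1) % 2], axis=1
    ).astype(np.uint8)
    r, pivots = 0, []
    for c in range(len(erased)):                  # reduced row echelon form
        hits = np.nonzero(A[r:, c])[0]
        if hits.size == 0:
            continue                              # free variable: set to 0 below
        A[[r, r + hits[0]]] = A[[r + hits[0], r]]
        for j in range(A.shape[0]):
            if j != r and A[j, c]:
                A[j] ^= A[r]
        pivots.append(c)
        r += 1
        if r == A.shape[0]:
            break
    if np.any(A[r:, -1]):
        return None                               # inconsistent: not decodable
    E = np.zeros(H.shape[1], dtype=np.uint8)
    for row, c in enumerate(pivots):
        E[erased[c]] = A[row, -1]                 # any coset representative works
    return E

# Toy example: 3-bit repetition checks, qubits 0 and 1 erased, flip on qubit 0.
H = np.array([[1, 1, 0], [0, 1, 1]], dtype=np.uint8)
true_error = np.array([1, 0, 0], dtype=np.uint8)
syndrome = (H @ true_error) % 2
print(decode_erasure(H, syndrome, erased=[0, 1]))  # -> [1 0 0]
```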

Measurement-minimizing constructions leverage quantum local recovery: correcting $\delta$ erasures in a stabilizer code requires measuring only those stabilizers with support on the erased positions. For generalized surface codes, this yields at most $2\delta$ vertex and face measurements, further reducible by basis selection to $\delta$ each (Matsumoto, 27 Oct 2025). For codes with $n \gg \delta$, this reduces the number of measurements and of qudits involved by orders of magnitude.
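The selection rule itself is one line of linear algebra; a hedged sketch follows (illustrative helper, assuming a binary check matrix whose columns index qudits):

```python
import numpy as np

def checks_touching_erasures(H, erased):
    """Indices of rows of H (stabilizers) whose support meets the erased set."""
    return np.nonzero((H[:, erased] % 2).any(axis=1))[0]

H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.uint8)
print(checks_touching_erasures(H, erased=[3]))  # -> [2]: only one check needed
```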

In continuous-variable implementations, joint homodyne measurements across auxiliary modes and feedforward displacement suffice for deterministic erasure recovery, while postselection can enhance the correction fidelity, albeit at reduced success rates (Lassen et al., 2010, Villasenor et al., 2022).

4. Thresholds, Logical Error Suppression, and Performance

Erasure codes achieve substantially elevated thresholds compared to their Pauli-error counterparts:

  • Surface code: $p^* \approx 5.6\%$ for pure erasure (vs. $\sim 1\%$ for Pauli noise), and up to $0.23$ for full circuit-noise models with erasure bias (Violaris et al., 5 Jan 2026, Pecorari et al., 27 Feb 2025).
  • LDPC codes: La-cross codes reach $p^* \approx 0.4$, outperforming standard surface codes at equal $N, K$ and achieving order-of-magnitude reductions in logical error rates below threshold (Pecorari et al., 27 Feb 2025).
  • Topological codes: Toric and XZZX codes empirically saturate the erasure capacity threshold $p^* \approx 0.5$ at zero rate (Kuo et al., 2024).

The scaling law for logical error rate is

P_L \propto p_e^{\,d} + p^{\lceil d/2 \rceil}

for erasure rate $p_e$ and residual Pauli rate $p$, with exponential suppression in $d$ for pure erasure noise (Violaris et al., 5 Jan 2026, Kang et al., 2022).
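Evaluating this scaling with unit prefactors (an assumption; real prefactors and threshold normalizations are code-dependent) shows how quickly the erasure term becomes negligible relative to the residual Pauli term as $d$ grows:

```python
import math

def logical_rate(p_e, p, d):
    """Heuristic scaling model P_L ~ p_e^d + p^ceil(d/2), unit prefactors."""
    return p_e**d + p**math.ceil(d / 2)

# 1% heralded erasure rate, 0.1% residual Pauli rate
for d in (3, 5, 7, 9):
    print(f"d={d}: P_L ~ {logical_rate(1e-2, 1e-3, d):.3e}")
```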

Trade-offs between cost and performance are explicit in hybrid-erasure architectures: strategic placement of erasure qubits in central rows/columns of a surface code patch can boost effective code distance and threshold by up to 50%, at only 1.5–2× hardware cost compared to full erasure deployment (Chadwick et al., 30 Apr 2025).

5. Hardware Demonstrators and Implementation Strategies

Recent advances enable direct realization of erasure codes:

  • Superconducting dual-rail qubits: Resonant transmon pairs with active erasure check and reset yield $T_2$ up to milliseconds, erasure detection fidelity $\sim 99\%$, and error-bias ratios $>40$; mid-circuit checks contribute $<0.1\%$ dephasing (Levine et al., 2023, Koottandavida et al., 2023).
  • Cavity-QED: Double-post 3D cavity encoding yields erasure rates of $3.98\ \mathrm{ms}^{-1}$ and residual dephasing of $0.17\ \mathrm{ms}^{-1}$, enabling robust quantum error correction with improved scaling (Koottandavida et al., 2023).
  • Neutral atoms/metastable ions: Shelving in long-lived manifolds ($^{171}$Yb, $^{40}$Ca$^{+}$) and fluorescence detection convert spontaneous losses into erasure events detected with $>99\%$ fidelity, allowing implementation of $[[4,2,2]]$ codes, logical teleportation, and fault-tolerant protocols with threshold $p^* \sim 3\%$ (Zhang et al., 16 Jun 2025, Kang et al., 2022).
  • Spin qubit architectures: Singlet-triplet encoding enables hardware-efficient leakage detection, pushing the XZZX surface-code threshold to $p^* \approx 1.3\%$ and yielding several orders of magnitude of reduction in logical error rate without measurement feedback loops (Siegel et al., 15 Jan 2026).

Dynamical error reshaping techniques further suppress erasure-check errors and gate infidelity via pulse shaping and dynamically corrected gates (DCGs), achieving error rates at the $10^{-6}$ to $10^{-5}$ level (Dakis et al., 9 Oct 2025).

6. Distributed Quantum Storage and Entanglement Cost

Distributed quantum erasure correction involves minimizing resource costs for restoring lost nodes in quantum networks. For MDS codes, the entanglement cost in star networks is tightly bound at $2t$ qudits (replacement-hub) and $2t-1$ (helper-hub), with optimal protocols given by download-and-return circuits (Senthoor, 26 May 2025). Extensions to tree networks, regenerative codes, and approximate protocols are formulated via LOCC monotonicity and Schmidt-rank analyses, but trade-off curves with more than minimal helpers remain open.

7. Continuous-Variable Erasure Correction

CV erasure-correcting codes combine linear optics, Gaussian or non-Gaussian entanglement, and multi-mode redundancy. The four-mode code (Lassen et al., 2010) and three-mode code (Villasenor et al., 2022) experimentally demonstrate restoration of coherence and transmission fidelity well above the classical bound ($F_\mathrm{cl} = 0.5$), with deterministic schemes yielding $F \approx 0.57$ and postselective schemes $F \approx 0.82$ for representative squeezing parameters. Feedforward displacement based on multi-mode homodyne measurements and optimized gain enable near-perfect recovery of erased signals under experimentally realistic photon-loss probabilities.

8. Future Directions and Open Problems

Key directions include:

  • Optimization of erasure-check measurement protocol cadence and reset schemes for further threshold enhancement (Gu et al., 2024).
  • Design of erasure-specific decoders (BP+OSD, AMBP4) for very large QLDPC codes and concatenated architectures (Kuo et al., 2024, Pecorari et al., 27 Feb 2025).
  • Hardware-aware scheduling under parallelism and device constraints.
  • Characterization of entanglement cost trade-offs in distributed erasure correction beyond star topologies (Senthoor, 26 May 2025).
  • Extension of code-deformation k-shift protocols for managing dense, highly-erased patches in neutral atom arrays (Kobayashi et al., 2024).
  • Integration of permutation-invariant inner codes for blockwise loss models (Kuo et al., 2024).

Quantum erasure correction thus represents a convergent direction for scalable quantum fault-tolerance, combining hardware-driven error bias, efficient code design, optimized decoding, and reduced resource overhead with explicit codes and protocols that saturate channel capacities and outperform traditional approaches in a range of platforms.
