
Polyestimate: Fast Logical Error Estimation

Updated 24 January 2026
  • Polyestimate is an open-source software tool that quickly estimates logical error rates in surface code quantum error correction systems using a precomputed simulation database.
  • It uses interpolation for small code distances and exponential extrapolation for larger distances, reducing computation times from hours to fractions of a second.
  • The tool supports both detailed stochastic and depolarizing error models, enabling hardware designers to effectively assess fault-tolerance feasibility in quantum systems.

Polyestimate is an open-source software tool designed to provide rapid, user-friendly estimates of logical error rates in the surface code, a leading quantum error-correcting code architecture. Unlike full-scale “Autotune” simulations that incorporate exhaustive details of hardware noise models at considerable computational expense, Polyestimate functions as a fast, lightweight alternative aimed at quantum hardware designers and theorists requiring immediate feedback on whether specific physical gate error rates and code distances will meet target logical error thresholds. By leveraging a precomputed database of Autotune results for key code distances and a principled interpolation and extrapolation scheme, Polyestimate circumvents the need for in-depth expertise in surface code protocols, providing reliable error estimates in a fraction of a second (Fowler, 2013).

1. Purpose and Scope

Polyestimate targets the practical challenge of mapping physical gate-level error rates (typically characterized by quantum hardware designers) onto logical error rates for surface codes of arbitrary distance, a mapping crucial to determining the feasibility of fault-tolerant quantum computation. The surface code's practicality stems from its reliance only on a two-dimensional nearest-neighbor array of qubits, requiring gate error rates below roughly 1%. However, detailed Autotune simulations for evaluating logical error rates can demand hours or days per data point, making rapid design iteration impractical. Polyestimate addresses this by returning logical X and Z failure probabilities for any code distance d in under a second, without requiring surface code expertise or in-depth simulation. It is principally intended for rapid, iterative exploration by quantum hardware development teams and theorists comparing code and hardware parameters (Fowler, 2013).

2. Supported Input Error Models

Polyestimate is constructed around the standard surface-code syndrome-measurement circuits, encompassing initialization (Init), Z-basis measurement (M_Z), Hadamard (H), controlled-NOT (CNOT), and identity gates (with durations matched to the nontrivial gates). Input error models can be provided at varying levels of detail:

  • Detailed stochastic error models: The user can specify an explicit error probability for each outcome of each of the eight gate types. For example, each CNOT can have a complete 15-parameter error specification, listing p_{IX}, p_{IY}, …, p_{ZZ}.
  • Depolarizing error rates: Alternatively, a single depolarizing error probability per gate can be provided, with designated rates for each gate and matching identity gate durations: p_{Init}, p_{Meas}, p_{H}, p_{CNOT}, p_{IdInit}, p_{IdH}, p_{IdMeas}, and p_{IdCNOT}.

Irrespective of input specificity, Polyestimate reduces the full gate-level descriptions to three effective parameters per error type A ∈ {X, Z}:

  • p_{0A}: Syndrome-qubit error per round,
  • p_{1A}: Data-qubit identity error per round,
  • p_{2A}: Two-qubit depolarizing error.

Single-qubit gate error contributions are mapped onto independent X and Z error channels by p'_X = p_X + p_Y and p'_Z = p_Y + p_Z.
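The single-qubit reduction above can be sketched in a few lines. The type and function names below (`PauliRates`, `reduceSingleQubit`) are illustrative assumptions, not Polyestimate's actual API; only the arithmetic follows the mapping stated in the text:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: a single-qubit stochastic error model (pX, pY, pZ)
// is mapped onto independent X and Z channels. A Y error flips both the
// X and Z syndromes, so it contributes to both effective channels.
struct PauliRates { double pX, pY, pZ; };
struct EffectiveChannels { double pXeff, pZeff; };

EffectiveChannels reduceSingleQubit(const PauliRates& r) {
    // p'_X = p_X + p_Y,  p'_Z = p_Y + p_Z
    return { r.pX + r.pY, r.pY + r.pZ };
}
```

For example, a gate with p_X = 1e-4, p_Y = 2e-4, p_Z = 3e-4 yields effective channels p'_X = 3e-4 and p'_Z = 5e-4.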

3. Logical Error Rate Estimation Framework

Logical error rates in the surface code decrease exponentially with increasing code distance d when physical gate error rates remain below the threshold (≈ 1–1.25%). Polyestimate's methodology is as follows:

  • For small d (3 ≤ d ≤ 6), values of p_{A_L}(d) for A = X, Z are obtained by direct interpolation from the database over a grid in (r_0, r_1, p_2), where r_{0A} = p_{0A}/p_{2A} ∈ [0.01, 200] and r_{1A} = p_{1A}/p_{2A} ∈ [0.01, 1], parameterizing a wide array of physical scenarios.
  • For d > 6, the logical error rate is extrapolated exponentially:

    • For odd d:

      p_{A_L}(d) ≈ C x^{⌊(d+1)/2⌋}

      with x = p_{A_L}(5)/p_{A_L}(3) and C = p_{A_L}(3)/x².

    • For even d:

      p_{A_L}(d) ≈ D y^{⌊(d+1)/2⌋}

      with y = p_{A_L}(6)/p_{A_L}(4) and D = p_{A_L}(4)/y².
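The two-branch extrapolation can be written directly from these formulas. The function name and signature below are assumptions for illustration; the inputs are the four database-backed logical error rates p_L(3), p_L(4), p_L(5), p_L(6):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the exponential extrapolation: odd distances are
// anchored on d = 3 and d = 5, even distances on d = 4 and d = 6, and the
// suppression factor per two steps of distance is raised to floor((d+1)/2).
double extrapolateLogicalError(int d, double pL3, double pL4,
                               double pL5, double pL6) {
    if (d % 2 == 1) {
        double x = pL5 / pL3;        // suppression per two steps, odd chain
        double C = pL3 / (x * x);    // chosen so that d = 3 is reproduced
        return C * std::pow(x, (d + 1) / 2);   // integer division = floor
    } else {
        double y = pL6 / pL4;        // suppression per two steps, even chain
        double D = pL4 / (y * y);    // chosen so that d = 4 is reproduced
        return D * std::pow(y, (d + 1) / 2);
    }
}
```

By construction the formula reproduces the database values at d = 3 and d = 4 exactly; for example, with p_L(3) = 1e-3 and p_L(5) = 1e-4 (so x = 0.1), d = 7 extrapolates to 1e-5.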

This approach obviates the need to recompute the weight coefficients c_w in the conventional expression P_L(d, p) = Σ_w c_w(d) p^w (1 − p)^{d−w} for each new hardware scenario, dramatically simplifying the estimation process (Fowler, 2013).
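For contrast, the conventional weight expansion that Polyestimate sidesteps is a simple polynomial evaluation once the coefficients c_w(d) are known; the expensive part is obtaining those coefficients by simulation for every scenario. The function and the coefficient values below are purely illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate P_L(d, p) = sum_w c_w(d) p^w (1-p)^(d-w) given precomputed
// weight coefficients c[w]. The coefficients themselves would normally
// require costly circuit-level simulation to determine.
double logicalErrorFromWeights(const std::vector<double>& c, double p, int d) {
    double total = 0.0;
    for (std::size_t w = 0; w < c.size(); ++w) {
        int iw = static_cast<int>(w);
        total += c[w] * std::pow(p, iw) * std::pow(1.0 - p, d - iw);
    }
    return total;
}
```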

4. Implementation and Computational Characteristics

Polyestimate’s implementation consists of a lightweight C++ core with an optional Python interface. Upon initialization, it loads a compact database containing precomputed Autotune simulation results spanning relevant error parameter regimes and code distances (d = 3, 4, 5, 6). The core estimation algorithms execute:

  • Reduction of arbitrary gate errors to the effective (p_{0X}, p_{1X}, p_{2X}) and (p_{0Z}, p_{1Z}, p_{2Z}) descriptors.
  • Linear interpolation in three-dimensional (r_0, r_1, p_2) space for database-backed code distances.
  • Exponential extrapolation as described above for any d > 6.
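A minimal sketch of the interpolation step, assuming a regular grid stored in row-major order. Polyestimate's actual database layout, and whether it interpolates in linear or log space, is not specified here, so everything named below is an illustrative assumption:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Trilinear interpolation over a regular (r0, r1, p2) grid of precomputed
// logical error rates, flattened in [r0][r1][p2] row-major order.
struct Grid3D {
    std::vector<double> r0Axis, r1Axis, p2Axis;  // strictly increasing
    std::vector<double> values;

    double at(std::size_t i, std::size_t j, std::size_t k) const {
        return values[(i * r1Axis.size() + j) * p2Axis.size() + k];
    }

    // Index of the grid cell containing x (clamped to the last cell).
    static std::size_t cell(const std::vector<double>& axis, double x) {
        std::size_t i = 0;
        while (i + 2 < axis.size() && axis[i + 1] <= x) ++i;
        return i;
    }

    double interpolate(double r0, double r1, double p2) const {
        std::size_t i = cell(r0Axis, r0), j = cell(r1Axis, r1), k = cell(p2Axis, p2);
        double tx = (r0 - r0Axis[i]) / (r0Axis[i + 1] - r0Axis[i]);
        double ty = (r1 - r1Axis[j]) / (r1Axis[j + 1] - r1Axis[j]);
        double tz = (p2 - p2Axis[k]) / (p2Axis[k + 1] - p2Axis[k]);
        double result = 0.0;
        // Weighted sum over the 8 corners of the enclosing cell.
        for (int dx = 0; dx < 2; ++dx)
            for (int dy = 0; dy < 2; ++dy)
                for (int dz = 0; dz < 2; ++dz)
                    result += at(i + dx, j + dy, k + dz)
                              * (dx ? tx : 1 - tx)
                              * (dy ? ty : 1 - ty)
                              * (dz ? tz : 1 - tz);
        return result;
    }
};
```

Trilinear interpolation reproduces any function that is linear in each coordinate exactly, which is the sense in which a sufficiently dense grid keeps the lookup error small.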

Performance benchmarks indicate that a typical estimation query—encompassing database load, parameter reduction, and logical error lookup or extrapolation—executes in approximately 10^{-2} seconds on a conventional laptop. For comparison, a single Autotune simulation at d = 8 can require hundreds of CPU-hours, demonstrating the significant computational advantage of Polyestimate for quick design exploration. The tool is engineered for both command-line and API-driven workflows, with dedicated constructs for specifying depolarizing rates and obtaining logical X and Z error rates for arbitrary code distances.

| Feature | Polyestimate | Full Autotune Simulation |
| --- | --- | --- |
| Underlying method | Interpolation + exponential extrapolation | Full circuit-level simulation |
| Supported distances | d = 3–6 (direct); d > 6 (extrapolated) | Arbitrary (with time cost) |
| Time per query | ~10^{-2} s | Hours to days |

5. Example Workflows

Users may interact with Polyestimate through straightforward interfaces:

  • Command-line usage:

polyestimate \
  --p_init=1e-3 --p_meas=1e-3 \
  --p_h=1e-3 --p_cnot=1e-3 \
  --p_idleInit=1e-3 --p_idleH=1e-3 --p_idleMeas=1e-3 \
  --distance=7

This workflow produces estimates for logical error rates at the specified code distance, given gate-wise depolarizing rates.

  • C++-style API usage:

PolyEstimator est(
  p_init, p_meas, p_H, p_CNOT,
  p_IdInit, p_IdH, p_IdMeas);
est.setDistance(9);
double pX_logical = est.estimateLogicalX();
double pZ_logical = est.estimateLogicalZ();

These interfaces are designed such that users need not be experts in surface code internals, facilitating rapid evaluation in hardware optimization cycles.

6. Limitations and Underlying Assumptions

Polyestimate presumes two-dimensional nearest-neighbor qubit connectivity and employs minimum-weight perfect matching for error correction, treating X and Z error chains independently and neglecting Y-correlated errors. This simplification is generally appropriate for standard hardware noise profiles but may introduce inaccuracies when error correlations are significant or when input CNOT error models are highly asymmetric (e.g., p'_{IX} ≫ p'_{XI}, p'_{XX}). In such cases, Polyestimate’s forced balancing of error rates can overestimate logical errors, and full Autotune simulations or analytic asymptotic methods may be required.

Additionally, the precomputed Polyestimate database covers two-qubit depolarizing error rates p_{2A} in the range 10^{-4} to 2×10^{-2}; estimate fidelity outside this region would necessitate supplementing the database with further simulations. In practice, Polyestimate achieves logical error rate estimates within 10–15% of full Autotune results across diverse error profiles, including instances with 10% measurement error rates. This trade-off between speed and accuracy positions Polyestimate primarily as a tool for early-stage quantum hardware assessment and iterative architectural design (Fowler, 2013).
