Evolutionary Belief Propagation Decoder

Updated 27 December 2025
  • The Evolutionary Belief Propagation (EBP) decoder is a quantum error correction decoding framework that extends traditional BP with trainable, edge-specific weights optimized via differential evolution.
  • A hybrid EBP+OSD configuration combines a low-latency, fixed-iteration BP stage with a robust ordered statistics decoding (OSD) post-processor, achieving lower logical error rates at reduced computational cost.
  • Weight sharing and complexity-aware differential evolution selection decrease computational demands by up to 63% compared to conventional BP+OSD methods.

The Evolutionary Belief Propagation (EBP) Decoder is a quantum error correction (QEC) decoding framework that integrates message-passing algorithms with evolutionary metaheuristics for end-to-end performance optimization. EBP extends conventional belief propagation (BP) by introducing trainable, edge-specific weights into the BP equations, replacing gradient-based optimization with a differential evolution (DE) strategy for non-differentiable logical error rate objectives. When used in tandem with ordered statistics decoding (OSD) in a hybrid EBP+OSD configuration, EBP achieves superior logical error rates (LER) and reduced computational complexity, especially under stringent low-latency constraints such as a limit of five BP iterations (Kwak et al., 20 Dec 2025).

1. Weighted Belief Propagation Foundations

EBP is developed for decoding quantum stabilizer codes under depolarizing noise and operates in the framework of quaternary belief propagation (BP₄) for an $\llbracket n,k,d\rrbracket$ code with $m$ parity checks. Standard BP propagates log-likelihood ratios (LLRs) for qubit errors along the Tanner graph, with message updates for variable-to-check and check-to-variable steps. In EBP, each channel and incoming check message is multiplied by a distinct, trainable weight, yielding weighted sum-product operations. For any variable (qubit) $v$ and error type $\zeta\in\{X,Z,Y\}$, with channel LLR $L_{v(\zeta)}$, the EBP message update rules at iteration $t$ are:

$$m_{v(\zeta)\to c}^{(t)} = \overline{w}^{(t)}_v\,L_{v(\zeta)} + \sum_{\substack{c'\in\mathcal{N}(v)\setminus c \\ \langle\zeta,S_{c',v}\rangle=1}} w_{c'\to v}^{(t)}\,m_{c'\to v}^{(t-1)}$$

$$m_{v(\zeta)}^{(t)} = \overline{w}^{(t)}_v\,L_{v(\zeta)} + \sum_{\substack{c'\in\mathcal{N}(v) \\ \langle\zeta,S_{c',v}\rangle=1}} w_{c'\to v}^{(t)}\,m_{c'\to v}^{(t)}$$

The weight parameters $\overline{w}^{(t)}_v$ ("channel weights") and $w_{c\to v}^{(t)}$ ("edge weights") are subject to DE-based optimization. With all weights set to 1, the algorithm reduces to ordinary (unweighted) BP.
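As a concrete illustration, the following minimal Python sketch computes the weighted variable-to-check update above for a single qubit $v$ and error type $\zeta$. The data layout (dictionaries keyed by check index) and the function name `weighted_v2c_message` are illustrative assumptions, not the implementation of Kwak et al.; setting all weights to 1 recovers the standard BP update.

```python
def weighted_v2c_message(channel_llr, incoming, anticommutes,
                         w_channel, w_edges, exclude_check):
    """Weighted variable-to-check update for one qubit v and error type zeta.

    channel_llr   : channel LLR L_{v(zeta)} (scalar)
    incoming      : {check c': previous check-to-variable message m_{c'->v}^{(t-1)}}
    anticommutes  : {check c': True if <zeta, S_{c',v}> = 1}
    w_channel     : trainable channel weight  w-bar_v^{(t)}
    w_edges       : {check c': trainable edge weight w_{c'->v}^{(t)}}
    exclude_check : destination check c, excluded from the sum
    """
    msg = w_channel * channel_llr
    for c_prime, m in incoming.items():
        if c_prime != exclude_check and anticommutes[c_prime]:
            msg += w_edges[c_prime] * m
    return msg
```

The posterior message $m_{v(\zeta)}^{(t)}$ is obtained the same way with the exclusion of the destination check removed.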

2. Differential Evolution-Based Weight Optimization

The non-differentiable nature of the post-OSD logical error rate, denoted $\mathrm{LER}_+$, motivates the use of DE, a population-based metaheuristic. The DE optimizer seeks the weight set $\mathcal{W} = \{\overline{w}_v^{(t)},\,w_{c\to v}^{(t)}\}$ that minimizes

$$\mathrm{LER}_+(\mathcal{W}) = \mathrm{UFR}_{\rm EBP}(\mathcal{W}) + \mathrm{FFR}_{\rm EBP}(\mathcal{W})\cdot \mathrm{UFR}_{\rm OSD}$$

where $\mathrm{UFR}$ and $\mathrm{FFR}$ denote the unflagged and flagged failure rates, respectively, of the EBP stage (OSD is invoked only upon a flagged failure). DE combines mutation, crossover, and complexity-aware selection to evolve the weights. Weight sharing is used to constrain optimization dimensionality: all variable-node weights at a given iteration share a scalar, while check-to-variable edge weights are indexed by edge position within the check's degree class rather than by edge identity. This significantly reduces the parameter space, enabling practical DE training in less than one hour for surface-code distances up to $d=11$.
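A minimal DE loop of this kind (rand/1/bin style) is sketched below. It assumes a hypothetical helper `estimate_ler_plus(w)` that runs Monte Carlo decoding trials with the shared weight vector `w` and returns an $\mathrm{LER}_+$ estimate; population size, weight bounds, and the plain greedy selection are illustrative defaults, while the complexity-aware selection and the switch to best/1/bin described in the paper are covered in Section 4.

```python
import numpy as np

def de_optimize(estimate_ler_plus, dim, pop_size=32, generations=100,
                F=0.5, CR=0.9, bounds=(0.0, 2.0), seed=0):
    """DE/rand/1/bin sketch minimizing the non-differentiable LER_+ objective
    over a shared-weight vector of length `dim`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([estimate_ler_plus(w) for w in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f_trial = estimate_ler_plus(trial)
            if f_trial <= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```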

3. Concatenation with Ordered Statistics Decoding (EBP+OSD)

OSD is employed as a high-performance post-processor that is triggered after EBP either reaches the maximum iteration limit $\overline{\ell}$ or explicitly flags failure. Soft-decision inputs, specifically the posterior LLRs from EBP, are used to compute a reliability metric for each qubit:

$$r_v = \min_{\zeta\in\{X,Z,Y\}} |m_{v(\zeta)}|$$

OSD then sorts variables by $r_v$ and performs Gaussian elimination on the submatrix of the stabilizer restricted to the $k$ most reliable qubits, followed by exhaustive enumeration of low-weight errors up to a specified OSD order $p$. The lowest-weight error pattern consistent with the syndrome is selected. EBP+OSD pseudocode formally specifies BP and OSD invocation, reliability ordering, enumeration, and candidate selection based on syndrome matching.
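The order-0 core of this procedure can be sketched as follows. For brevity the sketch works on a binary parity-check matrix over GF(2) rather than the full quaternary/symplectic representation used by BP₄, the function names are placeholders, and the enumeration of order-$p$ candidates is omitted.

```python
import numpy as np

def reliability(posterior_llrs):
    """r_v = min over error types of |m_{v(zeta)}|; posterior_llrs has shape (n, 3)."""
    return np.min(np.abs(posterior_llrs), axis=1)

def osd0(H, syndrome, posterior_llrs):
    """Order-0 OSD sketch: reorder columns by decreasing reliability, reduce over
    GF(2), and read off an error pattern that reproduces the syndrome."""
    n = H.shape[1]
    order = np.argsort(-reliability(posterior_llrs))       # most reliable first
    Hs, s = H[:, order] % 2, syndrome.copy() % 2
    pivots, row = [], 0
    for col in range(n):                                    # Gauss-Jordan over GF(2)
        hits = np.nonzero(Hs[row:, col])[0]
        if hits.size == 0:
            continue
        swap = row + hits[0]
        Hs[[row, swap]], s[[row, swap]] = Hs[[swap, row]], s[[swap, row]]
        for r in range(Hs.shape[0]):
            if r != row and Hs[r, col]:
                Hs[r] ^= Hs[row]
                s[r] ^= s[row]
        pivots.append(col)
        row += 1
        if row == Hs.shape[0]:
            break
    e_perm = np.zeros(n, dtype=int)
    e_perm[pivots] = s[:len(pivots)]                        # pivot positions carry the syndrome
    e = np.zeros(n, dtype=int)
    e[order] = e_perm                                       # undo the reliability permutation
    return e
```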

4. Adaptations for Low-Latency Decoding

EBP+OSD is expressly designed for low-latency QEC by restricting the number of BP iterations to $\overline{\ell} = 5$, which is suited for hardware-level, real-time decoding environments. Weight sharing accelerates DE convergence and enables optimization within realistic time budgets. The DE selection mechanism incorporates secondary objectives: when two solutions offer near-equal $\mathrm{LER}_+$ (within 1%), the one with the lower OSD activation probability (i.e., lower $\mathrm{FFR}_{\rm EBP}$) is preferred, resulting in fewer expensive OSD calls. After half of the DE generations have completed, the strategy switches to a "best/1/bin" variant to accelerate convergence toward the best candidate found.
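The tie-breaking rule can be written compactly. The 1% relative tolerance follows the description above, while the `Candidate` record and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    weights: list      # shared EBP weight vector
    ler_plus: float    # estimated post-OSD logical error rate
    ffr: float         # flagged failure rate (OSD activation probability)

def complexity_aware_select(parent: Candidate, trial: Candidate) -> Candidate:
    """Prefer the candidate with fewer OSD calls when LER_+ is a near tie (within 1%);
    otherwise keep the candidate with the lower LER_+."""
    if abs(trial.ler_plus - parent.ler_plus) <= 0.01 * max(parent.ler_plus, 1e-12):
        return trial if trial.ffr < parent.ffr else parent
    return trial if trial.ler_plus < parent.ler_plus else parent
```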

5. Computational Complexity

EBP+OSD achieves substantial reductions in net decoding complexity under error models relevant for quantum LDPC and surface codes. For a Tanner graph with variable-node degree $d_v$ and check-node degree $d_c$, the cost per EBP iteration is

$$C_{\rm iter}^{\rm EBP} = n\,(d_v+1) + m\,(d_c+1) = \mathcal{O}(n\,d_v + m\,d_c)$$

with total EBP cost $C_{\rm EBP} = \overline{\ell}\, C_{\rm iter}^{\rm EBP}$. OSD's dominant cost is approximately $\alpha n^3$, where $\alpha$ encapsulates implementation details and the OSD order $p$. The average total cost is

$$C_{\rm total} = C_{\rm EBP} + \mathrm{FFR}_{\rm EBP}\, C_{\rm OSD}$$

For surface codes at depolarizing rate $p=0.05$, EBP+OSD yields a net complexity reduction of 35–63% versus BP+OSD (Kwak et al., 20 Dec 2025).
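The cost model translates directly into a small calculator; the numeric value of $\alpha$ is implementation dependent and is left as a parameter here.

```python
def average_decoding_cost(n, m, d_v, d_c, ell_bar, ffr_ebp, alpha=1.0):
    """Average per-shot cost: ell_bar EBP iterations at n(d_v+1) + m(d_c+1)
    operations each, plus an ~alpha * n**3 OSD pass paid only on the fraction
    ffr_ebp of flagged failures."""
    c_iter = n * (d_v + 1) + m * (d_c + 1)
    c_ebp = ell_bar * c_iter
    c_osd = alpha * n ** 3
    return c_ebp + ffr_ebp * c_osd
```

Because the cubic OSD term dominates for codes of appreciable size, lowering $\mathrm{FFR}_{\rm EBP}$ is the main lever behind the reported 35–63% savings.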

6. Performance Benchmarks and Thresholds

EBP+OSD demonstrates improvements in both threshold and pseudo-threshold metrics compared to BP+OSD and unweighted decoders. Representative results (all for $\overline{\ell} = 5$ iterations unless noted):

| Decoder | Surface code threshold | QLDPC $n=72$ | QLDPC $n=144$ |
|---|---|---|---|
| BP+OSD | 15.6% | 3.8% | 5.3% |
| EBP+OSD | 16.5% | 6.7% | 9.2% |
| EBP+OSD ($\overline{\ell}=10$) | 17.1% | 7.0% | 9.8% |

At $p=0.1$, the frame error rate for surface code decoding is reduced by up to an order of magnitude relative to BP+OSD in the five-iteration regime. The observed trend is an almost monotonic improvement in threshold values as $\overline{\ell}$ increases, with standard BP+OSD saturating much earlier. EBP+OSD further reduces the average iteration count and OSD invocation probability, enabling improvements in both logical error rates and decoding complexity (Kwak et al., 20 Dec 2025).
