
Evolutionary Belief Propagation (EBP)

Updated 9 January 2026
  • Evolutionary Belief Propagation (EBP) is a quantum error correction decoder that enhances standard BP by adaptively adjusting multiplicative weights via differential evolution.
  • It employs a weight-sharing scheme and per-iteration parameterization to mitigate the effect of detrimental cycles and trapping sets in the Tanner graph and improve reliability.
  • Empirical evaluations show that combining EBP with ordered statistics decoding (OSD) reduces logical error rates and computational cost significantly.

Evolutionary Belief Propagation (EBP) is a quantum error correction decoding algorithm that extends standard belief propagation (BP) through the introduction of trainable multiplicative weights optimized by differential evolution (DE). EBP is designed for low-latency, high-performance decoding of quantum stabilizer codes, and is particularly effective when paired with ordered statistics decoding (OSD) as a post-processing stage. Empirical evaluations in quantum surface codes and quantum low-density parity-check (QLDPC) codes demonstrate that EBP+OSD provides significant gains in logical error rates and computational efficiency under stringent iteration constraints (Kwak et al., 20 Dec 2025).

1. Decoding Problem and Standard BP Framework

The error correction scenario is defined for an $[[n, k, d]]$ stabilizer code represented by a set of independent generators $\mathcal{S} = \langle S_1, \ldots, S_m \rangle$, whose binary symplectic mapping yields an $m \times n$ parity-check matrix $H$. Given an unknown Pauli error $\mathcal{E}$ on $n$ qubits, syndrome measurements $\mathbf{s} \in \mathbb{F}_2^m$ are acquired by evaluating $s_i = \langle \mathcal{E}, S_i \rangle$.

The decoder's objective is to estimate an error $\hat{\mathcal{E}}$ with binary representation $e$ such that $He = \mathbf{s} \bmod 2$, ideally correcting up to a stabilizer. BP is implemented on the Tanner graph of $H$, with variable nodes (VNs) and check nodes (CNs). In the quaternary BP formulation ($\mathrm{BP}_4$), each VN $v$ maintains a log-likelihood ratio (LLR) vector $\mathbf{L}_v = (L_v(I), L_v(X), L_v(Z), L_v(Y))$, initialized for a depolarizing channel of error rate $p$ as $L_v(\zeta) = \ln\frac{1-p}{p/3}$ for $\zeta \in \{X, Z, Y\}$ and $L_v(I) = 0$.

Standard BP iteratively exchanges messages $m_{v\to c}^{(\ell)}(\zeta)$ and $m_{c\to v}^{(\ell)}(\zeta)$ between VNs and CNs using min-sum rules. Posterior LLRs $m_v^{(\ell)}(\zeta)$ are computed after each iteration to form a tentative error estimate $\hat{e}_v^{(\ell)} = \arg\min_{\zeta} m_v^{(\ell)}(\zeta)$. Decoding proceeds until a matching syndrome is found or until the maximum iteration budget $\bar{\ell}$ is reached, in which case a flagged failure is declared.
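The control flow can be made concrete with a minimal sketch. The following simplified, binary (rather than quaternary) syndrome min-sum decoder assumes scalar LLRs and a NumPy parity-check matrix; the paper's $\mathrm{BP}_4$ decoder carries the four-component vectors $\mathbf{L}_v$ instead, but the message schedule and flagged-failure logic are the same.

```python
import numpy as np

def min_sum_bp(H, syndrome, p, max_iter=5):
    """Syndrome-based min-sum BP on a binary parity-check matrix H.

    Binary sketch of the schedule described above (assumes every check
    has degree >= 2).  Returns (error_estimate, flagged), where
    flagged=True marks a flagged failure after max_iter iterations.
    """
    m, n = H.shape
    llr0 = np.log((1 - p) / p)                 # channel LLR, same for all VNs
    msg_vc = llr0 * H.astype(float)            # VN -> CN messages on the edges
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        msg_cv = np.zeros((m, n))
        for c in range(m):                     # CN update (min-sum rule)
            vs = np.flatnonzero(H[c])
            for v in vs:
                others = vs[vs != v]
                sign = (-1) ** syndrome[c] * np.prod(np.sign(msg_vc[c, others]))
                msg_cv[c, v] = sign * np.min(np.abs(msg_vc[c, others]))
        post = llr0 + msg_cv.sum(axis=0)       # posterior LLR per VN
        e_hat = (post < 0).astype(int)         # tentative error estimate
        if np.array_equal(H @ e_hat % 2, syndrome):
            return e_hat, False                # syndrome matched: success
        for c in range(m):                     # VN update (extrinsic messages)
            vs = np.flatnonzero(H[c])
            msg_vc[c, vs] = post[vs] - msg_cv[c, vs]
    return e_hat, True                         # flagged failure

# Tiny usage example: a single error on the first bit of a 3-bit chain.
H = np.array([[1, 1, 0], [0, 1, 1]])
e_hat, flagged = min_sum_bp(H, np.array([1, 0]), p=0.1)  # -> [1, 0, 0], False
```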

2. Trainable Weight Parameterization

EBP extends BP by inserting adaptive, learnable weights into the VN update step. For each iteration $\ell$, the VN-to-CN message update becomes:

$$m^{(\ell)}_{v\to c}(\zeta) = \overline{w}_v^{(\ell)}\, L_v(\zeta) + \sum_{\substack{c' \in N(v) \setminus \{c\} \\ \langle \zeta, S_{c',v} \rangle = 1}} w_{c'\to v}^{(\ell)}\, m_{c'\to v}^{(\ell)}(\zeta)$$

where $\overline{w}_v^{(\ell)}$ (channel weight) and $w_{c'\to v}^{(\ell)}$ (check-to-variable weight) are real-valued scaling parameters. The same weights are applied in the posterior computation. Standard BP is recovered when all weights are set to unity.

To control parameter growth and exploit code symmetry, EBP employs a weight-sharing scheme: (i) all VNs at a given iteration share the same channel weight $\overline{w}^{(\ell)}$, and (ii) CN-to-VN weights are indexed only by the CN degree $r$, with a unique weight $w_r^{(\ell)}$ per edge index and iteration. The effective parameter set is:

$$\mathcal{W} = \bigl\{\overline{w}^{(\ell)},\, w_1^{(\ell)}, \ldots, w_{\bar{r}}^{(\ell)} \;\big|\; \ell = 1, \ldots, \bar{\ell}\bigr\}$$

yielding a compact search space of size $\bar{\ell}(\bar{r} + 1)$.
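To make the layout concrete, a minimal sketch of how the shared parameter set could be stored and applied is given below. The array shape, the 1-based degree indexing, and the function names are illustrative assumptions, not the paper's implementation; initializing all weights to one recovers standard BP, as noted above.

```python
import numpy as np

L_BAR, R_BAR = 5, 10   # iteration budget ℓ̄ and maximum CN degree r̄ (illustrative)

# One shared channel weight plus r̄ degree-indexed weights per iteration:
# ℓ̄(r̄ + 1) parameters in total.  All-ones corresponds to unweighted BP.
W = np.ones((L_BAR, R_BAR + 1))

def weighted_vn_message(L_v, incoming, W, it):
    """VN -> CN message using the shared weights of iteration `it`.

    `incoming` holds (r, message) pairs from every check adjacent to v
    except the target check, where r (1-based) selects the shared weight
    w_r.  Scalar sketch: BP_4 applies this independently to each Pauli
    component zeta with <zeta, S_{c',v}> = 1.
    """
    w_ch, w_edge = W[it, 0], W[it, 1:]
    return w_ch * L_v + sum(w_edge[r - 1] * msg for r, msg in incoming)
```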

3. Differential Evolution for Weight Optimization

The non-differentiable nature of the post-processing (OSD) renders gradient-based training ineffective. EBP instead adopts differential evolution (DE), a population-based, derivative-free metaheuristic. The fitness objective is the overall logical error rate (LER) post-OSD:

$$\mathrm{LER}_+(\mathcal{W}) = \mathrm{UFR}_{\mathrm{EBP}}(\mathcal{W}) + \mathrm{FFR}_{\mathrm{EBP}}(\mathcal{W}) \times \mathrm{UFR}_{\mathrm{OSD}}$$

where $\mathrm{UFR}$ denotes an unflagged failure rate (the decoder reports success but the residual error is logical) and $\mathrm{FFR}$ the flagged failure rate defined above; the second term accounts for flagged failures that OSD subsequently fails to correct.
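In code, the fitness of a candidate weight set reduces to a single expression once the three rates have been estimated; the rate-estimation harness itself is simulation-specific and not shown here.

```python
def ler_plus(ufr_ebp: float, ffr_ebp: float, ufr_osd: float) -> float:
    """Fitness LER_+ of a candidate weight set W (formula above).

    Each argument is a Monte Carlo estimate: EBP's unflagged failure
    rate, EBP's flagged failure rate, and OSD's failure rate on the
    cases handed over to it.
    """
    return ufr_ebp + ffr_ebp * ufr_osd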

The DE process maintains a population $\{\mathcal{W}_t\}_{t=1}^T$. Each generation applies mutation, crossover (with probability $p_c = 0.7$), and selection operations to evolve the parameter set. The evaluation at each generation uses a Monte Carlo estimate (e.g., $10^5$ trials at $p_e = 0.1$). If two parameter sets have LERs within 1%, preference is given to the set with the lower EBP flagged failure rate (FFR), thereby further reducing OSD invocation.

After a fixed number of generations (e.g., $G = 200$), the best-performing parameter set is selected; a strategy switch to “best/1/bin” may be employed midway to accelerate convergence. This process directly optimizes the end-to-end quantum decoder performance rather than the isolated BP stage.
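As an illustration of how such a search could be driven, the sketch below uses SciPy's general-purpose DE optimizer with the hyperparameters quoted above ($p_c = 0.7$, $G = 200$, best/1/bin). The function `estimate_rates` is a hypothetical Monte Carlo harness and the weight bounds are illustrative; the paper's tie-breaking rule (preferring lower FFR when LERs are within 1%) is a custom selection step not reproduced by this off-the-shelf optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

L_BAR, R_BAR = 5, 10   # illustrative; must match the EBP configuration

def fitness(flat_w):
    W = flat_w.reshape(L_BAR, R_BAR + 1)
    # estimate_rates: hypothetical harness running ~1e5 Monte Carlo
    # decoding trials of EBP(+OSD) at p_e = 0.1 with weights W.
    ufr_ebp, ffr_ebp, ufr_osd = estimate_rates(W)
    return ufr_ebp + ffr_ebp * ufr_osd            # LER_+ objective

result = differential_evolution(
    fitness,
    bounds=[(0.0, 2.0)] * (L_BAR * (R_BAR + 1)),  # illustrative weight range
    strategy="best1bin",   # the paper switches to best/1/bin mid-run
    recombination=0.7,     # crossover probability p_c
    maxiter=200,           # generations G
    polish=False,          # objective is a noisy Monte Carlo estimate
)
best_W = result.x.reshape(L_BAR, R_BAR + 1)
```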

4. Ordered Statistics Decoding Post-Processing

If EBP fails to decode within $\bar{\ell}$ iterations (a flagged failure), OSD serves as a nonlinear post-decoder. The procedure operates as follows (a code sketch follows the list):

  1. Each VN is assigned a reliability measure (e.g., $R_v = \min_{\zeta \in \{X, Z, Y\}} m_v(\zeta)$).
  2. VNs are sorted by reliability, the columns of $H$ are permuted accordingly, and the matrix is brought to systematic form.
  3. For the most reliable set of $k$ positions, the decoder solves the syndrome equation on the complementary set, then exhaustively searches over small-weight flips of the reliable set to enforce $He = \mathbf{s}$.
  4. The minimum-weight compatible solution is returned.
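A minimal order-0 sketch of steps 1–3 over GF(2) is given below (a binary simplification of the quaternary case): columns are permuted by reliability, the system is brought to reduced row-echelon form, the reliable free positions are fixed to zero, and the pivot positions are read off. Higher OSD orders add the exhaustive small-weight flips of step 3.

```python
import numpy as np

def osd0(H, syndrome, reliability):
    """Order-0 OSD over GF(2): solve He = s with the most reliable
    positions (largest R_v) fixed to zero."""
    m, n = H.shape
    perm = np.argsort(reliability)               # least reliable columns first
    A = np.hstack([H[:, perm], syndrome[:, None]]).astype(int) % 2
    pivots, row = [], 0
    for col in range(n):                         # Gauss-Jordan over GF(2)
        hits = np.flatnonzero(A[row:, col]) + row
        if hits.size == 0:
            continue                             # dependent column: stays free
        A[[row, hits[0]]] = A[[hits[0], row]]    # bring a pivot into place
        for r in np.flatnonzero(A[:, col]):
            if r != row:
                A[r] ^= A[row]                   # clear the column elsewhere
        pivots.append(col)
        row += 1
        if row == m:
            break
    e_perm = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        e_perm[col] = A[r, -1]                   # free positions remain zero
    e = np.zeros(n, dtype=int)
    e[perm] = e_perm                             # undo the reliability sort
    return e
```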

EBP is trained to reduce both overall LER and the probability of reaching the OSD fallback stage, achieving a balance between pre-decoder complexity and post-decoder success.

5. Empirical Performance and Complexity

Performance is quantified by the logical error rate ($\mathrm{LER}$) and the OSD activation probability, i.e., the flagged failure rate ($\mathrm{FFR}$). The results under a depolarizing channel are summarized below:

| Code family | BP+OSD threshold | EBP+OSD threshold | Pseudo-threshold (BP/EBP) | Cost reduction |
| --- | --- | --- | --- | --- |
| Surface $[[d^2, 1, d]]$, $\bar\ell = 5$ | 15.6% | 16.5% | 12.9% / 13.7% ($d = 7$) | 35–63% (all tested) |
| Bicycle QLDPC $[[n, k, d]]$, $n = 72$ | 3.8% | 6.7% | — | — |
| Bicycle QLDPC $[[n, k, d]]$, $n = 144$ | 5.3% | 9.2% | — | — |

EBP+OSD achieves monotonic improvements in threshold and pseudo-threshold as $\bar{\ell}$ increases. In all tested codes, EBP+OSD reduces the average number of BP iterations and the frequency of OSD activation, leading to a 35–63% reduction in total computational cost compared to standard BP+OSD under equivalent parameters.

6. Algorithmic Insights and Practical Implications

  • The use of per-iteration, edge-indexed scaling weights in the VN update allows EBP to adaptively mitigate the influence of detrimental cycles and trapping sets in the Tanner graph.
  • DE directly targets the non-differentiable, end-to-end fitness $\mathrm{LER}_+(\mathcal{W})$, rather than optimizing intermediate or surrogate objectives.
  • The weight-sharing scheme maintains a modest parameter dimension, enabling DE to remain tractable and reliable in convergence; the learned weights display robustness to code distance and QLDPC block length, allowing reuse within code families.
  • The overall EBP+OSD decoder achieves state-of-the-art thresholds using very few BP iterations ($\bar{\ell} = 5$–$10$), rarely needs OSD post-processing, and outperforms traditional BP+OSD even under heavily tuned, high-iteration settings.
  • This low-latency, linear-complexity pre-processing in BP, combined with rare, high-complexity OSD post-decoding, is well-suited for practical, real-time quantum error correction requirements.

7. Summary and Research Context

Evolutionary Belief Propagation constitutes a principled augmentation of BP for quantum error correction, introducing explicit trainable parameters that control the propagation dynamics. The optimization of these weights via differential evolution, guided by direct minimization of logical error rates post-OSD, leads to substantial gains in both reliability and efficiency. The modular, data-driven nature of EBP’s weight optimization, combined with the judicious integration of OSD, positions EBP+OSD as an effective, scalable decoding paradigm for modern quantum codes under stringent latency and computational resource constraints (Kwak et al., 20 Dec 2025).

References

  1. Kwak et al., 20 Dec 2025.
