CDF-ORBGRAND Algorithm

Updated 6 December 2025
  • The paper introduces CDF-ORBGRAND as a code-agnostic soft-decision decoder that efficiently enumerates error patterns using rank companding to closely approach ML decoding performance.
  • It employs inverse reliability CDFs to map sorted channel metrics into weights, aligning decoding metrics with the true likelihood structure of binary-input channels.
  • The algorithm supports BICM and diverse code families, offering hardware-friendly complexity with sub-microsecond decoding and capacity-achieving performance.

CDF-ORBGRAND is a code-agnostic, soft-detection decoding algorithm that exploits the cumulative distribution function (CDF) of empirical channel reliabilities to efficiently approach maximum-likelihood (ML) decoding performance for moderate-blocklength error-correcting codes. It generalizes the Ordered Reliability Bits GRAND (ORBGRAND) approach by employing rank companding: mapping sorted channel reliability ranks to weights using the inverse reliability CDF, thereby aligning the decoding metric with the true likelihood structure of the channel. CDF-ORBGRAND achieves the symmetric channel capacity for binary-input memoryless channels and extends to BICM, attaining the BICM capacity under both ideal and non-ideal interleaving (Duffy et al., 2022, Li et al., 29 Nov 2025).

1. Problem Formulation and Channel Model

CDF-ORBGRAND targets the decoding of length-$n$ binary block codes of dimension $k$ ($R = k/n$), transmitted over binary-input memoryless channels. Codewords $c \in \{0,1\}^{n}$ are BPSK-modulated ($x_i = 2c_i - 1$) and received as $Y_i = x_i + N_i$ for i.i.d. Gaussian noise $N_i \sim \mathcal{N}(0, \sigma^2)$ in the AWGN case (Duffy et al., 2022). The decoder observes

  • The hard decision $y_i = 1_{\{Y_i > 0\}}$,
  • The soft reliability metric $\lambda_i = \mathrm{LLR}(Y_i) = \log\frac{f_{Y|C}(Y_i|1)}{f_{Y|C}(Y_i|0)} \propto Y_i$,
  • The absolute reliability $\ell_i = |\lambda_i|$.

The decoding objective is to identify the most probable noise pattern $Z \in \{0,1\}^n$ such that $y = c \oplus Z$, i.e., recover $c$ given observations $Y$.
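
As a concrete illustration of this setup, the following sketch simulates the AWGN/BPSK model and computes the observables listed above; the blocklength, noise level, and all-zero codeword are illustrative placeholders, not values taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

n, sigma = 128, 0.7                      # illustrative blocklength and noise level

c = np.zeros(n, dtype=int)               # placeholder codeword (all-zero word)
x = 2 * c - 1                            # BPSK: x_i = 2 c_i - 1
Y = x + sigma * rng.standard_normal(n)   # Y_i = x_i + N_i,  N_i ~ N(0, sigma^2)

y_hard = (Y > 0).astype(int)             # hard decisions y_i = 1{Y_i > 0}
llr = 2.0 * Y / sigma**2                 # lambda_i = LLR(Y_i), proportional to Y_i
rel = np.abs(llr)                        # absolute reliabilities ell_i = |lambda_i|
```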

2. Reliability Ranking and CDF-Based Weighting

Reliability metrics $\ell_i$ are ranked in ascending order: $\ell_{(1)} \leq \ell_{(2)} \leq \cdots \leq \ell_{(n)}$, with permutation $\pi$ recording index order. The posterior error probability for bit $i$ is $B_i = 1/(1+e^{\ell_i})$. For a candidate noise pattern $z \in \{0,1\}^n$,

$$P(Z = z) \propto \exp\left(-\sum_{i=1}^n \ell_i z_i\right).$$

Patterns are thus ranked by their reliability-weighted sums $\mathrm{Rel}(z) = \sum_{i=1}^n \ell_{(i)} z_i$.

CDF-ORBGRAND further quantizes the sorting via the inverse empirical reliability CDF: for sorted $|T_i|$ (or $|\lambda_i|$), the ranks $r_i$ are mapped to weights $\gamma_i = \Psi^{-1}\!\left(\frac{r_i}{n+1}\right)$, where $\Psi(t)$ is the CDF of $|T|$ under input symmetry (Li et al., 29 Nov 2025).
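
A minimal sketch of this rank-companding step, assuming reliabilities `rel` for the current block and a calibration pool `calib_pool` of $|T|$ samples standing in for the channel CDF $\Psi$; the empirical quantile is used as $\Psi^{-1}$, an approximation rather than the closed-form CDF of the cited analysis.

```python
import numpy as np

def cdf_compand(rel, calib_pool):
    """Map reliability ranks r_i to companded weights gamma_i = Psi^{-1}(r_i / (n+1)).

    rel        : absolute reliabilities |T_i| of the received block (length n).
    calib_pool : sample of |T| values approximating the reliability CDF Psi.
    """
    n = len(rel)
    order = np.argsort(rel)             # permutation pi: ascending reliability
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(1, n + 1)  # r_i = rank of |T_i| within the block
    gamma = np.quantile(calib_pool, ranks / (n + 1))  # empirical Psi^{-1}
    return gamma, order
```

Passing the block's own reliabilities as the calibration pool reduces to using the empirical CDF of the block itself, which for large $n$ already tracks $\Psi$ closely (see Section 4).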

3. Algorithmic Structure and Decoding Procedure

The core of CDF-ORBGRAND is the efficient enumeration of noise or error patterns in order of increasing total reliability cost. The algorithm proceeds as follows (Duffy et al., 2022, Li et al., 29 Nov 2025):

  • Preprocessing: Fit the sorted reliability values $(\ell_{(1)}, \dots, \ell_{(n)})$ with a piecewise-linear spline: for segment $j$,

    $$\hat{\ell}_{(i)} = J_{j-1} + \beta_j (i - I_{j-1}), \quad \text{for } I_{j-1} < i \leq I_j.$$

Store quantized offsets, slopes, and segment anchors.

  • Pattern Generation: For target weight $W$, enumerate all $m$-tuple segment weights $(W_1, \ldots, W_m)$ such that $\sum_j W_j = W$, meeting segment constraints. Within each segment, generate all binary patterns of the desired reliability weight using the "Landslide" integer-partition algorithm. Global patterns are assembled as concatenations across segments.
  • Decoding: For each generated pattern $z$, lift through $\pi$ to the original index set, test whether $y \oplus z$ is a valid codeword (using a code-membership oracle), and return on first success; a simplified software sketch follows this list.
  • Stopping Rule: Decoding halts after a maximum threshold $A$ of patterns, chosen (for ML guarantees) above $2^{n-k}$ or, for URLLC energy-saving, possibly smaller.
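
The sketch below illustrates the overall loop in simplified form: it enumerates patterns by their plain rank-weight sum using a naive distinct-part partition generator, standing in for the segment-wise Landslide machinery and the companded weights; the parity-check matrix `H` and the query cap `max_queries` are assumed inputs.

```python
import numpy as np

def distinct_partitions(w, max_part):
    """Yield partitions of w into distinct parts, each part <= max_part."""
    if w == 0:
        yield []
        return
    for part in range(min(w, max_part), 0, -1):
        for rest in distinct_partitions(w - part, part - 1):
            yield [part] + rest

def orbgrand_decode(y_hard, rel, H, max_queries=100_000):
    """Test error patterns in increasing rank-weight order; return first codeword hit."""
    n = len(y_hard)
    order = np.argsort(rel)                       # order[0] = least reliable index
    queries = 0
    for W in range(n * (n + 1) // 2 + 1):         # total rank weight of a pattern
        for parts in distinct_partitions(W, n):   # ranks of the positions to flip
            z = np.zeros(n, dtype=int)
            idx = np.array([p - 1 for p in parts], dtype=int)
            z[order[idx]] = 1                     # lift ranks back to original indices
            cand = (y_hard + z) % 2
            queries += 1
            if not ((H @ cand) % 2).any():        # code-membership (syndrome) check
                return cand, queries
            if queries >= max_queries:            # stopping rule (threshold A)
                return None, queries
    return None, queries
```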

Offline, CDF-ORBGRAND uses a precomputed exhaustive or truncated error-pattern list $P$ ordered by $\sum_i r_i$ or, equivalently, by the companded weights $\sum_i \gamma_i$. At runtime, each query involves only bit-flipping and a code-membership check (Li et al., 29 Nov 2025).
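
For the offline list, a brute-force (and therefore heavily truncated) construction could look as follows; `gamma` is the vector of companded weights indexed by rank, and `max_flips`/`list_size` are illustrative truncation parameters. Supports are kept in the rank domain and lifted through $\pi$ at query time, as described above.

```python
from itertools import combinations

def build_pattern_list(gamma, max_flips, list_size):
    """Truncated error-pattern list ordered by companded weight sum over the support."""
    n = len(gamma)
    scored = []
    for t in range(max_flips + 1):                    # number of flipped positions
        for support in combinations(range(n), t):     # supports in the rank domain
            scored.append((sum(gamma[i] for i in support), support))
    scored.sort(key=lambda item: item[0])             # lowest-weight patterns first
    return [support for _, support in scored[:list_size]]
```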

4. Rank Companding and Information-Theoretic Optimality

CDF-ORBGRAND distinguishes itself by precisely companding error-pattern ranks via the channel reliability CDF. Empirically, for large $n$, the normalized rank $r_i/(n+1)$ approximates $\Psi(|T_i|)$, and thus $\gamma_i = \Psi^{-1}(r_i/(n+1))$ closely tracks the true soft reliability $|T_i|$. The error pattern search thus matches near-ML order at low computational overhead.

In the mismatched decoding (GMI) framework, with unified decoding metric

$$D(w) = \frac{1}{n} \sum_{i=1}^n \gamma_i \, 1[\mathrm{sgn}(T_i) x_i(w) < 0],$$

for true codeword $w = 1$, the expected value and variance of $D(1)$ converge to specific integrals involving the channel law. For any incorrect codeword, a Chernoff bound yields achievable rates under the CDF-ORBGRAND metric. The supremum occurs at $\theta = -1$, showing that the maximum achievable rate coincides exactly with the channel's mutual information $I(X;Y)$. Thus, CDF-ORBGRAND is capacity-achieving under symmetric binary inputs (Li et al., 29 Nov 2025).
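
A direct software rendering of the metric $D(w)$, assuming arrays `T` (channel LLRs), `x_w` (the $\pm 1$ BPSK image of candidate $w$), and `gamma` from the companding step:

```python
import numpy as np

def decoding_metric(T, x_w, gamma):
    """Unified CDF-ORBGRAND metric D(w) for one candidate codeword w."""
    mismatches = (np.sign(T) * x_w) < 0     # indicator 1[sgn(T_i) x_i(w) < 0]
    return np.sum(gamma * mismatches) / len(T)
```

Candidates with smaller $D(w)$ are tested earlier; this is the same ordering produced by enumerating error patterns in increasing companded weight, as in Section 3.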

5. Complexity and Hardware Implementation

CDF-ORBGRAND is designed for efficient hardware realization:

  • Reliability sorting: Achieved via bitonic or odd-even merge sort networks ($O(n \log^2 n)$), or approximate min/max trees.
  • Piecewise-linear model: Maintains small integer tables (offsets, slopes, anchors).
  • Pattern generation: Integer partitions leverage local, SIMD-amenable logic.
  • Cartesian product over segments: Interleaved pattern streams accommodated in parallel FIFOs.
  • Code-membership check: For linear codes, syndrome computation and zero-check are executed in parallel; a software analogue is sketched below.
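
The code-membership check in the final bullet has a simple software analogue: with parity-check matrix $H$, a word belongs to the code iff its syndrome vanishes, and several candidate patterns can be screened at once, mirroring the parallel zero-check. `H` and the pattern batch are assumed inputs in this sketch.

```python
import numpy as np

def batch_membership(y_hard, patterns, H):
    """For each error pattern z (rows of `patterns`), decide whether y XOR z is a codeword.

    Returns a boolean vector: True where the syndrome H (y XOR z)^T is all-zero (mod 2).
    """
    cands = (y_hard[None, :] + patterns) % 2   # batch of candidate words, shape (q, n)
    syndromes = (cands @ H.T) % 2              # shape (q, n-k)
    return ~syndromes.any(axis=1)
```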

All pipeline stages can be replicated for multi-cycle parallelism. System throughput and latency scale with the area devoted to hardware replication. Average-case query complexity is $E[T] \approx 2^{n-k}$, but practical soft-detection reduces this: for CA-Polar[256,234], 3-line ORBGRAND reaches $\sim 3\times10^3$ queries at BLER $= 10^{-3}$ and $\sim 3\times10^2$ at BLER $= 10^{-4}$, enabling sub-$\mu$s decode times. Worst-case complexity remains bounded by $2^{n-k}$ (Duffy et al., 2022).

6. Extension to BICM and Universality

CDF-ORBGRAND extends naturally to bit-interleaved coded modulation (BICM) systems. For each symbol, $m$ bit-LLRs $T_{i,j}$ are ranked globally over $mN$ indices. Segment-wise CDFs $\Psi_j$ are averaged to $\bar\Psi(t) = \frac{1}{m}\sum_j \Psi_j(t)$ to produce companded ranks $\gamma_{i,j} = \bar\Psi^{-1}\!\left(R_{i,j}/(mN+1)\right)$. The unified metric generalizes to

$$D(w) = \frac{1}{mN} \sum_{i=1}^N \sum_{j=1}^m \gamma_{i,j}\, 1[\mathrm{sgn}(T_{i,j})\, x_{i,j}(w) < 0].$$

The error pattern list is managed identically. The GMI analysis confirms that CDF-ORBGRAND achieves the sum bit-channel mutual information, the classical BICM capacity. The decoder exhibits universality, with virtually identical performance for RLC, BCH, and CRC codes of equivalent length/rate under ORBGRAND decoding (Li et al., 29 Nov 2025, Duffy et al., 2022).
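
A sketch of the BICM companding step under the same empirical-CDF assumption as before: `T` holds one LLR per (symbol, bit level), and `calib_pools` holds equal-sized calibration samples of $|T|$ per bit level, so that pooling them realizes the averaged CDF $\bar\Psi$.

```python
import numpy as np

def bicm_compand(T, calib_pools):
    """Companded ranks gamma_{i,j} = Psi_bar^{-1}(R_{i,j} / (mN + 1)) for BICM.

    T           : array of bit-LLRs with shape (N, m), one column per bit level.
    calib_pools : list of m equal-sized samples of |T_{.,j}|, one per bit level.
    """
    N, m = T.shape
    rel = np.abs(T).ravel()                    # rank all mN reliabilities jointly
    order = np.argsort(rel)
    ranks = np.empty(N * m, dtype=int)
    ranks[order] = np.arange(1, N * m + 1)
    pooled = np.concatenate([np.asarray(p) for p in calib_pools])  # realizes Psi_bar
    gamma = np.quantile(pooled, ranks / (N * m + 1))
    return gamma.reshape(N, m)
```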

7. Performance Characteristics and Impact

Empirical results establish that, for CA-Polar[256,234]:

  • 3-line ORBGRAND outperforms CA-SCL (list size 16, 5G NR) by $\approx 0.4$ dB at BLER $\approx 10^{-3}$.
  • 3-line ORBGRAND lies within 0.1 dB of the ML benchmark (SGRAND) down to BLER $\approx 10^{-4}$.
  • One-line ORBGRAND lags by $\approx 0.6$ dB but remains superior to CA-SCL.
  • Equivalent performance is observed for RLC, BCH, and CRC classes.

Hardware implementations demonstrate average query counts as low as $\sim 3\times10^2$ at BLER $= 10^{-4}$, facilitating sub-$\mu$s decoding with worst-case sub-ms latency. The algorithm is suitable for URLLC and energy-efficient soft detection: the choice of stopping threshold $A$ trades a minor BLER penalty for energy savings. Complexity is comparable to or lower than that of leading code-specific soft-decision decoders while offering near-ML accuracy and full code universality (Duffy et al., 2022).

In summary, CDF-ORBGRAND leverages reliability-driven rank companding and integer-partition-based error pattern enumeration to yield a capacity-achieving universal soft-decision decoder, with practical implementation, hardware efficiency, and robust empirical performance across block code families and modulation schemes (Li et al., 29 Nov 2025, Duffy et al., 2022).
