CDF-ORBGRAND Algorithm
- The paper introduces CDF-ORBGRAND as a code-agnostic soft-decision decoder that efficiently enumerates error patterns using rank companding to closely approach ML decoding performance.
- It employs inverse reliability CDFs to map sorted channel metrics into weights, aligning decoding metrics with the true likelihood structure of binary-input channels.
- The algorithm supports BICM and diverse code families, offering hardware-friendly complexity with sub-microsecond decoding and capacity-achieving performance.
CDF-ORBGRAND is a code-agnostic, soft-detection decoding algorithm that exploits the cumulative distribution function (CDF) of empirical channel reliabilities to efficiently approach maximum-likelihood (ML) decoding performance for moderate-blocklength error-correcting codes. It generalizes the Ordered Reliability Bits GRAND (ORBGRAND) approach by employing rank companding: mapping sorted channel reliability ranks to weights via the inverse reliability CDF, thereby aligning the decoding metric with the true likelihood structure of the channel. CDF-ORBGRAND achieves the symmetric channel capacity for binary-input memoryless channels and extends to BICM, attaining the BICM capacity under both ideal and non-ideal interleaving (Duffy et al., 2022, Li et al., 29 Nov 2025).
1. Problem Formulation and Channel Model
CDF-ORBGRAND targets the decoding of length-$n$ binary block codes of dimension $k$ ($k < n$), transmitted over binary-input memoryless channels. Codewords are BPSK-modulated ($x_i = 1 - 2c_i$) and received as $r_i = x_i + n_i$ for i.i.d. Gaussian noise $n_i$ in the AWGN case (Duffy et al., 2022). The decoder observes
- The hard decision $\hat{y}_i = \mathbb{1}\{r_i < 0\}$,
- The soft reliability metric (log-likelihood ratio) $\ell_i = 2 r_i / \sigma^2$,
- The absolute reliability $|\ell_i|$.
The decoding objective is to identify the most probable noise-effect pattern $z \in \{0,1\}^n$ such that $\hat{y} \oplus z \in \mathcal{C}$, i.e., recover the transmitted codeword given the observations $r$.
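The observations above can be illustrated with a minimal sketch, assuming the standard BPSK/AWGN conventions; the function name `demodulate` and the omission of the constant LLR scale factor are illustrative choices, not from the papers:

```python
import numpy as np

# Hedged sketch: hard decisions and reliabilities from BPSK-over-AWGN
# samples, assuming x_i = 1 - 2*c_i so that a negative sample maps to
# a hard-decision 1. The LLR magnitude is proportional to |r_i|; the
# factor 2/sigma^2 is omitted since only the reliability ORDERING
# matters to ORBGRAND-style decoders.
def demodulate(r):
    hard = (r < 0).astype(int)   # hard decision per received sample
    reliability = np.abs(r)      # |LLR| up to a positive scale factor
    return hard, reliability

r = np.array([0.9, -0.1, -1.3, 0.2])
hard, rel = demodulate(r)
pi = np.argsort(rel)             # least reliable bit index first
```

The permutation `pi` is exactly the index order that the reliability ranking in the next section operates on.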
2. Reliability Ranking and CDF-Based Weighting
Reliability metrics are ranked in ascending order, $|\ell_{\pi(1)}| \le |\ell_{\pi(2)}| \le \cdots \le |\ell_{\pi(n)}|$, with the permutation $\pi$ recording the index order. The posterior error probability for bit $i$ is $p_i = 1/(1 + e^{|\ell_i|})$. For a candidate noise pattern $z$,
$$\log P(z) = \sum_{i=1}^{n} \bigl[ z_i \log p_i + (1 - z_i) \log(1 - p_i) \bigr] = \text{const} - \sum_{i=1}^{n} z_i |\ell_i|.$$
Patterns are thus ranked by their reliability-weighted sums $\sum_i z_i |\ell_i|$.
CDF-ORBGRAND further quantizes the sorting via the inverse empirical reliability CDF: for sorted reliabilities $|\ell|_{(1)} \le \cdots \le |\ell|_{(n)}$, the ranks are mapped to weights $w_i = F^{-1}(i/n)$, where $F$ is the CDF of $|\ell|$ under input symmetry (Li et al., 29 Nov 2025).
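A hedged sketch of the companding step, assuming the inverse CDF is read off the empirical quantiles of the observed reliabilities themselves; the function name `compand_ranks` and the quantization parameter `n_levels` are illustrative, not from the paper:

```python
import numpy as np

# Hedged sketch of rank companding: replace each sorted-rank index with a
# small integer weight derived from the inverse empirical reliability CDF.
# F^{-1}(i/n) is simply the i-th sorted reliability, so quantizing the
# sorted values yields nondecreasing integer weights suitable for
# combinatorial pattern enumeration.
def compand_ranks(reliabilities, n_levels=16):
    srt = np.sort(reliabilities)            # empirical quantiles of |LLR|
    scale = (n_levels - 1) / srt[-1]        # map the range onto integers
    weights = np.maximum(1, np.round(srt * scale).astype(int))
    return weights                          # nondecreasing integer weights

w = compand_ranks(np.array([0.1, 0.2, 0.9, 1.3]), n_levels=8)
```

Quantizing to a handful of integer levels is what keeps the downstream integer-partition enumeration tractable.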
3. Algorithmic Structure and Decoding Procedure
The core of CDF-ORBGRAND is the efficient enumeration of noise or error patterns in order of increasing total reliability cost. The algorithm proceeds as follows (Duffy et al., 2022, Li et al., 29 Nov 2025):
- Preprocessing: Fit the sorted reliability values with a piecewise-linear spline: for segment $j$ covering ranks $i \in [a_j, a_{j+1})$, approximate $w_i \approx b_j + s_j (i - a_j)$. Store the quantized offsets $b_j$, slopes $s_j$, and segment anchors $a_j$.
- Pattern Generation: For target weight $W$, enumerate all tuples of per-segment weights $(W_1, \ldots, W_m)$ such that $\sum_j W_j = W$, meeting the segment constraints. Within each segment, generate all binary patterns of the desired reliability weight using the "Landslide" integer-partition algorithm. Global patterns are assembled as concatenations across segments.
- Decoding: For each generated pattern $z$, lift it through $\pi^{-1}$ to the original index set, test whether $\hat{y} \oplus z$ is a valid codeword (using a code-membership oracle), and return the codeword on first success.
- Stopping Rule: Decoding halts after a maximum number of pattern queries, chosen large enough for approximate-ML guarantees or, for URLLC energy saving, possibly smaller.
Offline, CDF-ORBGRAND uses a precomputed exhaustive or truncated error-pattern list ordered by reliability-weighted sum or, equivalently, by the companded weights $w_i$. At runtime, each query involves only bit-flipping and a code-membership check (Li et al., 29 Nov 2025).
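The query loop above can be sketched end-to-end. Here a brute-force enumerator sorted by companded weight stands in for the Landslide generator, and a toy single-parity-check code plays the role of the code-membership oracle; all names (`grand_decode`, `spc`) are illustrative:

```python
from itertools import combinations

# Hedged sketch of the GRAND query loop: enumerate candidate flip sets in
# order of increasing total companded weight and stop at the first valid
# codeword. A brute-force sort replaces the paper's Landslide
# integer-partition generator, which produces the same order lazily.
def grand_decode(hard, weights, is_codeword, max_queries=10**4):
    n = len(hard)
    order = sorted(
        (c for k in range(n + 1) for c in combinations(range(n), k)),
        key=lambda c: sum(weights[i] for i in c),
    )
    for q, flips in enumerate(order[:max_queries]):
        cand = list(hard)
        for i in flips:
            cand[i] ^= 1                 # flip the selected bits
        if is_codeword(cand):
            return cand, q + 1           # decoded word and query count
    return None, max_queries             # abandonment

# toy length-4 single-parity-check code: even-weight words are codewords
spc = lambda c: sum(c) % 2 == 0
word, queries = grand_decode([1, 0, 0, 0], [1, 2, 3, 4], spc)
```

The empty flip set is queried first (weight 0), so an error-free hard decision is confirmed in a single query.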
4. Rank Companding and Information-Theoretic Optimality
CDF-ORBGRAND distinguishes itself by precisely companding error-pattern ranks via the channel reliability CDF. Empirically, for large $n$, the normalized rank $i/n$ approximates $F(|\ell|_{(i)})$, so the companded weight $F^{-1}(i/n)$ closely tracks the true soft reliability $|\ell|_{(i)}$. The error-pattern search thus matches near-ML order at low computational overhead.
In the mismatched decoding (generalized mutual information, GMI) framework, the unified decoding metric is evaluated at the true codeword, and its expected value and variance converge to specific integrals involving the channel law. For any incorrect codeword, a Chernoff bound yields the rates achievable under the CDF-ORBGRAND metric. Optimizing the Chernoff parameter shows that the maximum achievable rate coincides exactly with the channel's mutual information. Thus, CDF-ORBGRAND is capacity-achieving under symmetric binary inputs (Li et al., 29 Nov 2025).
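In generic mismatched-decoding notation (the standard GMI form, not the paper's exact symbols), the quantity being bounded is

```latex
% Generalized mutual information of a decoding metric q(x, y);
% the Chernoff parameter s > 0 is the quantity being optimized.
I_{\mathrm{GMI}}
  \;=\; \sup_{s > 0}\;
  \mathbb{E}\!\left[ \log
    \frac{q(X, Y)^{s}}
         {\mathbb{E}_{X'}\!\left[ q(X', Y)^{s} \right]}
  \right].
```

When $q$ is proportional to the true channel likelihood, the supremum is attained at $s = 1$ and $I_{\mathrm{GMI}}$ reduces to $I(X;Y)$; this is the sense in which the companded metric is capacity-achieving.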
5. Complexity and Hardware Implementation
CDF-ORBGRAND is designed for efficient hardware realization:
- Reliability sorting: Achieved via bitonic or odd-even merge sort networks ($O(n \log^2 n)$ compare-exchange elements), or approximate min/max trees.
- Piecewise-linear model: Maintains small integer tables (offsets, slopes, anchors).
- Pattern generation: Integer partitions leverage local, SIMD-amenable logic.
- Cartesian product over segments: Interleaved pattern streams accommodated in parallel FIFOs.
- Code-membership check: For linear codes, syndrome computation and zero-check are executed in parallel.
All pipeline stages can be replicated for multi-cycle parallelism. System throughput and latency scale with the area devoted to hardware replication. Worst-case query complexity is bounded by the abandonment threshold, but practical soft-detection operation sharply reduces the average: for CA-Polar [256,234], 3-line ORBGRAND requires orders of magnitude fewer queries on average at low BLER, enabling sub-microsecond decode times (Duffy et al., 2022).
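The code-membership oracle for a linear code reduces to a syndrome zero-check; a minimal sketch with the parity-check matrix of a toy [4,3] single-parity-check code (illustrative, not a code from the papers):

```python
import numpy as np

# Hedged sketch of linear-code membership: a word c is a codeword iff its
# syndrome H @ c vanishes mod 2. In hardware, each row of H is an XOR
# tree and the final zero-check is a wide NOR, so all checks run in
# parallel as described in the text.
H = np.array([[1, 1, 1, 1]], dtype=int)   # toy [4,3] SPC parity check

def is_codeword(c):
    return not np.any(H.dot(c) % 2)       # True iff every check passes

ok = is_codeword(np.array([1, 1, 0, 0]))  # even weight: valid
bad = is_codeword(np.array([1, 0, 0, 0])) # odd weight: invalid
```

For a general $[n,k]$ linear code, `H` is the $(n-k) \times n$ parity-check matrix and the same two lines apply unchanged.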
6. Extension to BICM and Universality
CDF-ORBGRAND extends naturally to bit-interleaved coded modulation (BICM) systems. Bit-LLRs are ranked globally over all bit indices of all symbols. The per-bit-level reliability CDFs are averaged into a single mixture CDF $\bar{F}$, which produces the companded ranks $\bar{F}^{-1}(i/n)$. The unified decoding metric generalizes accordingly.
The error pattern list is managed identically. The GMI analysis confirms that CDF-ORBGRAND achieves the sum bit-channel mutual information, the classical BICM capacity. The decoder exhibits universality, with virtually identical performance for RLC, BCH, and CRC codes of equivalent length/rate under ORBGRAND decoding (Li et al., 29 Nov 2025, Duffy et al., 2022).
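The CDF-averaging step can be sketched by pooling the per-level reliabilities and reading quantiles off the pooled sample, since the mixture CDF of a uniformly chosen bit level equals the distribution of the pooled data; this empirical stand-in and the name `mixture_inverse_cdf` are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Hedged sketch: pooling |LLR|s across bit levels realizes the averaged
# (mixture) CDF empirically, so its inverse is an empirical quantile of
# the pooled sample.
def mixture_inverse_cdf(llrs_per_level, u):
    pooled = np.abs(np.concatenate(llrs_per_level))
    return np.quantile(pooled, u)     # empirical F-bar^{-1}(u)

levels = [np.array([1.0, 3.0]), np.array([-2.0, 4.0])]
q_mid = mixture_inverse_cdf(levels, 0.5)   # median of pooled |LLR|s
```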
7. Performance Characteristics and Impact
Empirical results establish that, for CA-Polar [256,234]:
- 3-line ORBGRAND outperforms CA-SCL (list size 16, 5G NR) by 0.4 dB at low BLER.
- 3-line ORBGRAND lies within 0.1 dB of the ML benchmark (SGRAND) down to low BLER.
- One-line ORBGRAND lags by 0.6 dB but remains superior to CA-SCL.
- Equivalent performance is observed for RLC, BCH, and CRC classes.
Hardware implementations demonstrate average query counts far below the worst case at low BLER, facilitating sub-microsecond average decoding with worst-case sub-millisecond latency. The algorithm is suitable for URLLC and energy-efficient soft detection: the choice of stopping threshold trades a minor BLER penalty for energy savings. Complexity is comparable to or lower than that of leading code-specific soft-decision decoders, while offering near-ML accuracy and full code universality (Duffy et al., 2022).
In summary, CDF-ORBGRAND leverages reliability-driven rank companding and integer-partition-based error pattern enumeration to yield a capacity-achieving universal soft-decision decoder, with practical implementation, hardware efficiency, and robust empirical performance across block code families and modulation schemes (Li et al., 29 Nov 2025, Duffy et al., 2022).