Leveraging Code Structure to Improve Soft Output for GRAND, GCD, OSD, and SCL (2503.16677v1)
Abstract: In addition to a proposed codeword, error correction decoders that provide blockwise soft output (SO) return an estimate of the likelihood that the decoding is correct. Following Forney, such estimates have traditionally been possible only for list decoders, where the soft output is the likelihood that a decoding is correct given that it is assumed to be in the list. Recently, it has been established that Guessing Random Additive Noise Decoding (GRAND), Guessing Codeword Decoding (GCD), Ordered Statistics Decoding (OSD), and Successive Cancellation List (SCL) decoding can provide more accurate soft output, even without list decoding. Central to the improvement is a per-decoding estimate, readily calculated during the decoding process, of the likelihood that the correct decoding has not been found. Here we explore how linear codebook constraints can be employed to further enhance the precision of such SO. We evaluate performance by adapting a forecasting statistic called the Brier Score. Results indicate that the SO generated by the approach is essentially as accurate as the maximum a posteriori estimate.
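The abstract evaluates soft-output accuracy with the Brier Score. As background, the standard Brier Score is the mean squared error between predicted probabilities and observed binary outcomes; the sketch below is a minimal illustration of that standard definition, not the paper's specific adaptation. Here each probability is a decoder's reported likelihood that its decoding is correct, and each outcome records whether the decoding actually was correct (1) or not (0).

```python
def brier_score(probs, outcomes):
    """Standard Brier Score: mean squared error between predicted
    probabilities and binary outcomes. Lower is better; a perfectly
    calibrated, perfectly confident forecaster scores 0."""
    assert len(probs) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical example: per-block soft output (probability the decoding
# is correct) versus the actual correctness of each decoding.
soft_outputs = [0.9, 0.8, 0.3, 0.95]
was_correct = [1, 1, 0, 1]
print(brier_score(soft_outputs, was_correct))  # → 0.035625
```

A decoder whose soft output closely tracks the true probability of correctness achieves a Brier Score near the minimum attainable for the channel, which is the sense in which the paper compares its SO against the maximum a posteriori estimate.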