Fine-tuning ORBGRAND with Very Few Channel Soft Values (2507.08696v1)
Abstract: Guessing random additive noise decoding (GRAND) is a universal decoding paradigm that decodes by repeatedly testing error patterns until a codeword is identified, with the ordering of tests generated from the received channel values. On one hand, testing error patterns in descending order of posterior probability yields maximum likelihood decoding, but its implementation complexity is prohibitive. On the other hand, testing a prescribed set of error patterns permuted by the ranking of log-likelihood ratio magnitudes (i.e., ordered reliability bits, ORB) enables efficient implementation, but incurs a performance loss for finite-length codes. Aiming to harness the strengths of both approaches, this work proposes a fine-tuning method that improves ORBGRAND by adjusting the ordering of tests with the aid of very few exact channel soft values. The method is based on a metric for assessing the "well-orderedness" of error patterns. The metric is studied through the lens of the asymptotic theory of integer partitioning, which provides highly accurate estimates in numerical experiments. The metric then leads to an effective identification of which fine-tuning to conduct, at the cost of a negligible increase in complexity. Numerical experiments demonstrate that the proposed fine-tuning method achieves a substantial performance enhancement over ORBGRAND.
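To make the ORB-based ordering described in the abstract concrete, below is a minimal sketch of an ORBGRAND-style decoder, assuming the standard logistic-weight schedule (error patterns enumerated in increasing sum of the reliability ranks of the flipped bits). The paper's fine-tuning with a few exact channel soft values is not included here; the function names, the query budget, and the toy parity-check matrix in the usage snippet are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def distinct_partitions(w, n_max, largest=None):
    """Yield partitions of w into distinct parts taken from {1, ..., n_max}."""
    if largest is None:
        largest = n_max
    if w == 0:
        yield []
        return
    for part in range(min(w, largest), 0, -1):
        for rest in distinct_partitions(w - part, n_max, part - 1):
            yield [part] + rest

def orbgrand_decode(llr, H, max_queries=10_000):
    """Sketch of ORBGRAND-style decoding.

    llr : channel log-likelihood ratios, one per bit (positive favors bit 0)
    H   : binary parity-check matrix; c is a codeword iff H @ c = 0 (mod 2)
    Returns (decoded word, number of pattern tests performed).
    """
    n = len(llr)
    hard = (llr < 0).astype(int)        # hard-decision word
    rank = np.argsort(np.abs(llr))      # bit indices, least reliable first

    def is_codeword(word):
        return not np.any((H @ word) % 2)

    queries = 0
    # Enumerate error patterns in increasing logistic weight, where the
    # weight of a pattern is the sum of the 1-based reliability ranks of
    # its flipped bits; ties within a weight are broken arbitrarily.
    for w in range(0, n * (n + 1) // 2 + 1):
        for parts in distinct_partitions(w, n):
            trial = hard.copy()
            idx = np.asarray(parts, dtype=int) - 1   # ranks -> positions
            trial[rank[idx]] ^= 1                    # flip those bits
            queries += 1
            if is_codeword(trial):
                return trial, queries
            if queries >= max_queries:
                return hard, queries
    return hard, queries

# Toy usage with a (7,4) Hamming parity-check matrix (purely illustrative):
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.3, 1.7, 0.9, -1.2, 2.5, 0.4])
decoded, queries = orbgrand_decode(llr, H)
```

The paper's fine-tuning would perturb this rank-only schedule using a small number of exact soft values wherever the well-orderedness metric flags the ORB ordering as likely to deviate from the true posterior-probability order.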