Fine-tuning ORBGRAND with Very Few Channel Soft Values (2507.08696v1)

Published 11 Jul 2025 in cs.IT and math.IT

Abstract: Guessing random additive noise decoding (GRAND) is a universal decoding paradigm that decodes by repeatedly testing error patterns until a codeword is identified, with the order of tests determined by the received channel values. On one hand, testing error patterns in descending order of posterior probability yields maximum likelihood decoding, but its implementation complexity is prohibitive. On the other hand, testing a prescribed set of error patterns, permuted by the ranking among magnitudes of log-likelihood ratios (i.e., ordered reliability bits, ORB), enables efficient implementation but incurs a performance loss for finite-length codes. Aiming to harness the strengths of both approaches, this work proposes a fine-tuning method that improves ORBGRAND by adjusting the order of tests with the aid of very few exact channel soft values. The method is based on a metric for assessing the "well-orderedness" of error patterns. The metric is studied through the lens of the asymptotic theory of integer partitioning, which provides highly accurate estimates in numerical experiments. The metric then leads to an effective identification of which fine-tuning to conduct, at the cost of a negligible increase in complexity. Numerical experiments demonstrate that the proposed fine-tuning method achieves a substantial performance enhancement over ORBGRAND.
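
The basic ORBGRAND mechanism the abstract builds on can be sketched as follows: rank bit positions by |LLR|, then test error patterns in increasing "logistic weight", where the patterns of a given weight w correspond to the partitions of w into distinct parts (each part selecting one ranked position to flip). The sketch below is a minimal illustration under stated assumptions, not the paper's fine-tuned variant; the function names and the toy parity-check matrix used in the usage note are hypothetical.

```python
import numpy as np

def distinct_partitions(w, max_part):
    """Yield partitions of w into distinct parts <= max_part, largest part first."""
    def rec(remaining, largest):
        if remaining == 0:
            yield []
            return
        for part in range(min(remaining, largest), 0, -1):
            for rest in rec(remaining - part, part - 1):
                yield [part] + rest
    yield from rec(w, max_part)

def orbgrand_decode(llr, H, max_weight=None):
    """Basic ORBGRAND: test error patterns in increasing logistic weight.

    llr: per-bit log-likelihood ratios (positive -> bit 0 more likely).
    H:   binary parity-check matrix (numpy array); a word c is a codeword
         iff H @ c == 0 (mod 2).
    Returns the first candidate codeword found, or None if max_weight is hit.
    """
    llr = np.asarray(llr, dtype=float)
    n = len(llr)
    hard = (llr < 0).astype(int)            # hard decisions from channel signs
    rank = np.argsort(np.abs(llr))          # bit indices, least reliable first
    if max_weight is None:
        max_weight = n * (n + 1) // 2       # largest possible sum of distinct ranks
    if not np.any(H @ hard % 2):            # weight-0 pattern: flip nothing
        return hard
    for w in range(1, max_weight + 1):
        for parts in distinct_partitions(w, n):
            cand = hard.copy()
            for p in parts:                 # part p flips the p-th least reliable bit
                cand[rank[p - 1]] ^= 1
            if not np.any(H @ cand % 2):
                return cand
    return None
```

For example, with the toy parity-check matrix H = [[1,1,0],[0,1,1]] (a length-3 repetition code) and LLRs [2.0, -0.5, 1.5], the hard decision [0,1,0] fails the checks, and the first weight-1 pattern flips the least reliable bit (position 1), recovering the codeword [0,0,0]. Note that at weight 3 the order tests the partition [3] (flip the third least reliable bit) and [2,1] (flip the two least reliable bits); it is exactly this rank-only ordering, which ignores the actual LLR magnitudes, that the paper's fine-tuning adjusts using a few exact channel soft values.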
