
Efficient Low-Memory Fast Stack Decoding with Variance Polarization for PAC Codes

Published 8 Sep 2025 in cs.IT and math.IT (arXiv:2509.07231v1)

Abstract: Polarization-adjusted convolutional (PAC) codes have recently emerged as a promising class of error-correcting codes, achieving near-capacity performance particularly in the short block-length regime. In this paper, we propose an enhanced stack decoding algorithm for PAC codes that significantly improves parallelization by exploiting specialized bit nodes, such as rate-0 and rate-1 nodes. For a rate-1 node with $N_0$ leaf nodes in its corresponding subtree, conventional stack decoding must either explore all $2^{N_0}$ paths or, as in fast list decoding, restrict attention to a constant number of candidate paths. In contrast, our approach introduces a pruning technique that discards wrong paths with a probability exponentially approaching zero, retaining only those whose path metrics remain close to their expected mean values. Furthermore, we propose a novel approximation method for estimating variance polarization under the binary-input additive white Gaussian noise (BI-AWGN) channel. Leveraging these approximations, we develop an efficient stack-pruning strategy that selectively preserves decoding paths whose bit-metric values align with their expected means. This targeted pruning substantially reduces the number of active paths in the stack, thereby decreasing both decoding latency and computational complexity. Numerical results demonstrate that for a PAC(128,64) code, our method achieves up to a 70% reduction in the average number of paths without degrading error-correction performance.
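The core pruning idea in the abstract — keep only the paths whose metric stays near its expected mean — can be sketched as a simple threshold test. The following is a minimal illustrative sketch, not the paper's algorithm: the function name, the candidate-path data, the metric values, and the threshold `t` are all assumptions for demonstration. If path metrics concentrate around a mean `mu` with variance `sigma2` (which the paper estimates via variance polarization), a Gaussian tail bound gives a discard probability for correct paths that decays exponentially in `t`.

```python
import math

def prune_paths(paths, mu, sigma2, t=3.0):
    """Keep only paths whose metric lies within t standard deviations of
    the expected mean mu. Under a Gaussian model of the metric, a correct
    path is discarded with probability at most 2*exp(-t**2 / 2), which
    vanishes exponentially as t grows. (Illustrative sketch only.)"""
    sigma = math.sqrt(sigma2)
    return [p for p in paths if abs(p["metric"] - mu) <= t * sigma]

# Hypothetical candidate paths at a rate-1 node (values are made up):
candidates = [
    {"bits": (0, 1, 1), "metric": -1.8},   # near the expected mean -> kept
    {"bits": (1, 0, 1), "metric": -2.3},   # near the expected mean -> kept
    {"bits": (0, 0, 0), "metric": -14.7},  # far below the mean -> pruned
]
survivors = prune_paths(candidates, mu=-2.0, sigma2=1.0, t=3.0)
```

With `t = 3.0` the last candidate falls well outside the retained band and is pruned, while the two metrics near the mean survive; this is the mechanism by which the average stack size shrinks without (with high probability) losing the correct path.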
