Latent-Attention Based Transformer for Near-ML Polar Decoding in the Short-Code Regime
Abstract: Transformer architectures have emerged as promising deep learning (DL) tools for modeling complex sequence-to-sequence interactions in channel decoding. However, current transformer-based decoders for error correction codes (ECCs) fall short of conventional algebraic decoders in both performance and generalization, especially in the short-code regime. In this work, we propose a novel latent-attention based transformer (LAT) decoder for polar codes that addresses these limitations through three key innovations. First, we develop a latent-attention mechanism that replaces conventional self-attention. This modification enables the Query and Key matrices to be learned independently for code-aware attention computation, decoupling them from the Value matrix so as to emphasize position-wise decoding interactions while reducing interference from context correlations. Second, we devise a training framework with three synergistic components: entropy-aware importance sampling, which emphasizes low-probability regions of the signal constellation space; experience reflow, which introduces empirical labels to better characterize decoding boundaries; and dynamic label smoothing, which provides likelihood-based regularization. Third, we propose a code-aware mask scheme that adapts dynamically to varying code configurations. Numerical evaluations demonstrate that the proposed LAT decoder achieves near maximum-likelihood (ML) performance in both bit error rate (BER) and block error rate (BLER) for short-length polar codes. Furthermore, the architecture generalizes robustly across diverse code rates and code lengths.
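To make the first contribution concrete, the following minimal PyTorch sketch shows one plausible reading of the latent-attention mechanism described above: the Query and Key are free learnable parameters (one row per code position) while the Value remains a projection of the input. The class name `LatentAttention`, the shapes, the scaling, and the mask convention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Optional

class LatentAttention(nn.Module):
    """Sketch of a latent-attention layer: Query and Key are free learnable
    matrices (one row per code position), decoupled from the input-dependent
    Value projection, so the attention pattern captures position-wise,
    code-aware interactions rather than content correlations."""

    def __init__(self, seq_len: int, d_model: int,
                 mask: Optional[torch.Tensor] = None):
        super().__init__()
        # Latent Query/Key: learned directly, independent of the input.
        self.query = nn.Parameter(torch.randn(seq_len, d_model) * d_model ** -0.5)
        self.key = nn.Parameter(torch.randn(seq_len, d_model) * d_model ** -0.5)
        # Value is still projected from the input, as in standard attention.
        self.value_proj = nn.Linear(d_model, d_model)
        # Optional code-aware mask (True = blocked position pair); it could
        # be derived from the polar code configuration.
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        scores = self.query @ self.key.T / self.key.size(-1) ** 0.5
        if self.mask is not None:
            scores = scores.masked_fill(self.mask, float("-inf"))
        attn = F.softmax(scores, dim=-1)   # (seq_len, seq_len), input-independent
        return attn @ self.value_proj(x)   # broadcasts over the batch dimension

# Usage: a layer for codeword length 16 with 64-dimensional embeddings.
layer = LatentAttention(seq_len=16, d_model=64)
out = layer(torch.randn(8, 16, 64))  # -> (8, 16, 64)
```

Under this reading, the attention map is fixed after training because the Query and Key do not depend on the received sequence: it encodes the code structure, and the channel observation influences the output only through the Value path, which is consistent with the abstract's goal of reducing context correlation interference.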