
Learning to Decode Linear Codes Using Deep Learning (1607.04793v2)

Published 16 Jul 2016 in cs.IT, cs.LG, cs.NE, and math.IT

Abstract: A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph. These edges are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is the independence of the performance on the transmitted codeword. A crucial property of our new method is that our decoder preserved this property. Furthermore, this property allows us to learn only a single codeword instead of exponential number of code-words. Improvements over the belief propagation algorithm are demonstrated for various high density parity check codes.

Citations (438)

Summary

  • The paper introduces a novel approach that integrates deep neural networks with belief propagation to enhance decoding for HDPC codes.
  • It employs a soft Tanner graph with trainable edge weights, allowing efficient training on a single all-zero codeword.
  • Results demonstrate up to a 0.9 dB performance gain and a potential 10x reduction in complexity compared to standard BP decoding.

Deep Learning Enhancement of Belief Propagation Decoding for Linear Codes

This paper introduces a novel approach to decoding linear codes with deep learning, specifically refining the belief propagation (BP) algorithm as applied to high-density parity-check (HDPC) codes, on which standard BP performs poorly. The authors propose a deep neural network (DNN) model that assigns trainable weights to the edges of the code's Tanner graph, thereby optimizing message-passing performance. Importantly, the proposed method preserves a desirable property of BP: decoding performance remains independent of the transmitted codeword.

Methodology

The paper focuses on enhancing BP decoding of HDPC codes such as BCH codes. While BP approaches Shannon capacity on long, sparse codes, it falls well short of maximum likelihood (ML) performance on short HDPC codes, whose parity-check matrices are dense. To address this, the authors embed the Tanner graph representation within a neural network framework, where training adjusts the weights assigned to its edges, transforming it into what they describe as a "soft" Tanner graph.
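To make the "soft" Tanner graph concrete, here is a minimal sketch (not the paper's code): edges are enumerated from the nonzero entries of a parity-check matrix H, and each edge gets a trainable weight per iteration. The small H below is a (7,4) Hamming-code matrix chosen for illustration; the paper uses much denser BCH matrices.

```python
import numpy as np

# Small illustrative parity-check matrix (a (7,4) Hamming code);
# the paper uses denser BCH parity-check matrices.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Tanner-graph edges: one edge per nonzero entry of H, connecting
# check node c to variable node v.
edges = [(c, v) for c in range(H.shape[0])
                for v in range(H.shape[1]) if H[c, v]]

# "Soft" Tanner graph: one trainable weight per edge per BP iteration.
# Initializing every weight to 1.0 recovers the standard BP decoder.
n_iterations = 5
weights = {(t, e): 1.0 for t in range(n_iterations) for e in edges}
```

Because all-ones weights reproduce plain BP exactly, the trained decoder can never do worse than its initialization in principle; training only reweights messages.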

Unlike conventional neural-network decoders that require an extensive dataset of codewords, this method exploits BP's symmetry property: because the decoder's error rate is codeword-independent, it suffices to train on a single codeword, specifically the all-zero codeword, dramatically reducing the required training set. The neural network consists of alternating hidden layers, one per variable-to-check message-passing step and one per check-to-variable step.
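The alternating message-passing layers can be sketched as follows. This is a simplified illustration, not the paper's implementation: it attaches one weight per edge (shared across iterations, whereas the paper learns per-iteration weights), applies the weights on the variable-node side, and uses the standard tanh rule at check nodes. With all weights set to 1 it reduces to ordinary BP.

```python
import numpy as np

# Illustrative (7,4) Hamming parity-check matrix; the paper uses BCH codes.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
M, N = H.shape
edges = [(c, v) for c in range(M) for v in range(N) if H[c, v]]

def decode(llr, weights, n_iters=5):
    """Weighted BP decoder sketch. Positive LLR favours bit 0."""
    c2v = {e: 0.0 for e in edges}          # check-to-variable messages
    for _ in range(n_iters):
        # Variable-to-check: channel LLR plus weighted messages
        # from all checks other than the target check c.
        v2c = {}
        for (c, v) in edges:
            v2c[(c, v)] = llr[v] + sum(
                weights[(c2, v)] * c2v[(c2, v)]
                for c2 in range(M) if H[c2, v] and c2 != c)
        # Check-to-variable: standard tanh product rule.
        for (c, v) in edges:
            prod = 1.0
            for v2 in range(N):
                if H[c, v2] and v2 != v:
                    prod *= np.tanh(0.5 * v2c[(c, v2)])
            prod = np.clip(prod, -0.999999, 0.999999)  # keep arctanh finite
            c2v[(c, v)] = 2.0 * np.arctanh(prod)
    # Final marginals and hard decision.
    out = np.array([llr[v] + sum(weights[(c, v)] * c2v[(c, v)]
                                 for c in range(M) if H[c, v])
                    for v in range(N)])
    return (out < 0).astype(int)           # bit = 1 when the LLR is negative
```

For example, with unit weights and channel LLRs that weakly favour bit 1 in one position, five iterations recover the transmitted all-zero codeword.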

Experimental Setup

Training was conducted in the TensorFlow framework, using cross-entropy as the loss function and the RMSProp optimizer. Experiments were carried out on several BCH codes, a widely used family of linear block codes: BCH(15,11), BCH(63,36), BCH(63,45), and BCH(127,106). The neural network featured ten hidden layers, corresponding to five full BP iterations.
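A minimal sketch of the training objective and optimizer step, under the assumptions of this summary (not the paper's code): the loss is the cross-entropy between the decoder's output LLRs and the all-zero codeword, with the convention that positive LLR favours bit 0, and the weights are updated with a textbook RMSProp rule.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy_all_zero(output_llr):
    """Cross-entropy against the all-zero codeword.
    With positive-LLR-means-bit-0, P(bit = 1) = sigmoid(-llr)."""
    p_one = sigmoid(-np.asarray(output_llr, dtype=float))
    return -np.mean(np.log(1.0 - p_one))

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update: scale the step by a running RMS of gradients."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

Large positive output LLRs (confident zeros) drive the loss toward 0, while an uninformative LLR of 0 gives a loss of log 2 per bit, so minimizing this loss pushes the decoder toward confident correct decisions on the all-zero training codeword.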

Results

The proposed decoder demonstrated consistent improvements over the traditional BP algorithm, achieving close-to-ML performance on the small BCH(15,11) code. On the larger BCH(63,36), BCH(63,45), and BCH(127,106) codes, the gains were most prominent in the high-SNR regime, reaching up to 0.9 dB. The authors also report a notable reduction in computational complexity: five iterations of the learned decoder match the performance of 50 BP iterations, a roughly tenfold reduction.

Further analysis revealed that the trained weights follow a roughly normal distribution, in contrast to standard BP, which corresponds to fixing every edge weight to 1. This indicates the DNN learns to modulate the influence of individual edges in the code's Tanner graph.

Implications and Future Directions

The work represents a significant advancement in the incorporation of deep learning into channel coding, setting the foundation for future innovations in decoder designs. The findings underscore the potential to leverage DNNs to simplify decoders' complexity while achieving superior error rates. Future exploration may involve enhancing neural architectures and integrating other decoding schemes to optimize outcomes. Furthermore, examining the relationship between these neural networks' efficacy and the structural parameters of parity-check matrices could elucidate further enhancements in decoding performance.

This paper is pivotal in illustrating how machine learning can intersect with classical coding theory, opening avenues for hybrid approaches that offer both performance and efficiency gains.
