
The ADMM penalized decoder for LDPC codes (1409.5140v1)

Published 17 Sep 2014 in cs.IT and math.IT

Abstract: Linear programming (LP) decoding for low-density parity-check (LDPC) codes, proposed by Feldman et al., has been shown to have theoretical guarantees in several regimes and is not empirically observed to suffer from an error floor. However, at low signal-to-noise ratios (SNRs), LP decoding exhibits worse error performance than belief propagation (BP) decoding. In this paper, we seek to improve LP decoding at low SNRs while still achieving good high-SNR performance. We first present a new decoding framework obtained by trying to solve a non-convex optimization problem using the alternating direction method of multipliers (ADMM). This non-convex problem is constructed by adding a penalty term to the LP decoding objective. The goal of the penalty term is to make "pseudocodewords", which are the non-integer vertices of the LP relaxation at which the LP decoder fails, more costly. We name this decoder class the "ADMM penalized decoder". In our simulation results, the ADMM penalized decoder with $\ell_1$ and $\ell_2$ penalties outperforms both BP and LP decoding at all SNRs. For high-SNR regimes where simulation is infeasible, we use an instanton analysis and show that the ADMM penalized decoder has better high-SNR performance than BP decoding. We also develop a reweighted LP decoder using linear approximations to the objective with an $\ell_1$ penalty. We show that this decoder has an improved theoretical recovery threshold compared to LP decoding. In addition, we show that the empirical gain of the reweighted LP decoder is significant at low SNRs.
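A rough sketch of the penalized objective described in the abstract (the notation below is illustrative and assumed here, not quoted from the paper): starting from the LP decoding objective $\gamma^T x$, where $\gamma$ is the vector of log-likelihood ratios and $x \in [0,1]^n$ ranges over the relaxed codeword polytope, a penalty term is added to make fractional entries more costly:

$$ \min_{x \in [0,1]^n} \; \gamma^T x + \alpha \sum_{i=1}^{n} g(x_i), \qquad \text{e.g. } g(x_i) = -\lvert x_i - 0.5\rvert \;(\ell_1) \quad \text{or} \quad g(x_i) = -(x_i - 0.5)^2 \;(\ell_2), $$

subject to the same parity-check polytope constraints as standard LP decoding. Since $g$ is largest at $x_i = 0.5$ and smallest at the integer points $0$ and $1$, pseudocodewords (non-integer vertices) incur additional cost, and the resulting non-convex problem is then tackled with ADMM updates.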

Citations (79)
