Efficient Active Deep Decoding of Linear Codes using Importance Sampling (2310.13275v1)
Abstract: The quality and quantity of training data strongly influence the performance and effectiveness of deep learning models. In the context of error correction, it is essential to generate high-quality samples that are neither excessively noisy nor entirely correct, but close to the decision boundary of the decoding region. To accomplish this objective, this paper utilizes a restricted version of a recent result on Importance Sampling (IS) distributions for fast performance evaluation of linear codes. The IS distribution is applied over a segmented observation space and integrated with active learning. This combination allows samples to be generated iteratively from the shells whose acquisition functions, defined as the error probabilities conditioned on each shell, fall within a specified range. By sampling intelligently according to the proposed IS distribution, significant performance improvements are demonstrated for BCH(63,36) and BCH(63,45) codes with cycle-reduced parity-check matrices. The proposed IS-based active Weighted Belief Propagation (WBP) decoder improves on conventional WBP by up to 0.4 dB in the waterfall region and up to 1.9 dB in the error-floor region of the BER curve. The approach can easily be adapted to generate efficient training samples for any other deep learning-based decoder.
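The shell-based sampling idea above can be illustrated with a minimal Python sketch. This is not the paper's implementation: it assumes shells are defined by the noise-vector norm, substitutes a toy (7,1) repetition code with majority decoding for the BCH codes and WBP decoder, and uses a plain Monte Carlo estimate of each shell's conditional error probability as the acquisition value. Shells whose acquisition value falls inside a chosen range are then selected for sample generation, mirroring the active-learning loop described in the abstract.

```python
import math
import random

def sample_on_shell(n, radius, rng):
    """Draw a noise vector uniformly on the sphere of given radius in R^n."""
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [radius * x / norm for x in v]

def shell_error_rate(radius, n=7, trials=2000, seed=0):
    """Acquisition function: Monte Carlo estimate of P(error | noise shell).

    Toy stand-in for the decoder: an (n,1) repetition code over BPSK,
    all-ones codeword sent as +1's, decoded by majority sign.
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        noise = sample_on_shell(n, radius, rng)
        received = [1.0 + z for z in noise]
        if sum(1 for y in received if y > 0) <= n // 2:
            errors += 1
    return errors / trials

def select_shells(radii, lo=0.05, hi=0.95, **kw):
    """Keep only shells whose conditional error probability lies in [lo, hi].

    Shells with near-zero error rate yield trivially correct samples, and
    shells with near-one error rate yield hopelessly noisy ones; neither is
    informative for training, so both are skipped.
    """
    return [r for r in radii if lo <= shell_error_rate(r, **kw) <= hi]

# Example: scan shell radii and pick the informative ones near the
# decision boundary.
radii = [0.5 * k for k in range(1, 13)]
picked = select_shells(radii)
```

Small radii produce no decoding errors and are filtered out, while shells near the decision boundary survive; in the actual scheme the per-shell IS distribution of Pan and Mow replaces the brute-force Monte Carlo estimate, and the surviving shells feed training samples to the WBP decoder.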