
Uncertainty-Aware and Reliable Neural MIMO Receivers via Modular Bayesian Deep Learning (2302.02436v4)

Published 5 Feb 2023 in cs.IT, eess.SP, and math.IT

Abstract: Deep learning is envisioned to play a key role in the design of future wireless receivers. A popular approach to design learning-aided receivers combines deep neural networks (DNNs) with traditional model-based receiver algorithms, realizing hybrid model-based data-driven architectures. Such architectures typically include multiple modules, each carrying out a different functionality dictated by the model-based receiver workflow. Conventionally trained DNN-based modules are known to produce poorly calibrated, typically overconfident, decisions. Consequently, incorrect decisions may propagate through the architecture without any indication of their insufficient accuracy. To address this problem, we present a novel combination of Bayesian deep learning with hybrid model-based data-driven architectures for wireless receiver design. The proposed methodology, referred to as modular Bayesian deep learning, is designed to yield calibrated modules, which in turn improves both accuracy and calibration of the overall receiver. We specialize this approach for two fundamental tasks in multiple-input multiple-output (MIMO) receivers - equalization and decoding. In the presence of scarce data, the ability of modular Bayesian deep learning to produce reliable uncertainty measures is consistently shown to directly translate into improved performance of the overall MIMO receiver chain.
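The core mechanism described above can be illustrated with a small sketch. A common way to make a DNN-based module Bayesian is Monte Carlo dropout (Gal & Ghahramani), referenced by the paper: the module's soft output is averaged over several stochastic forward passes, approximating the Bayesian predictive distribution and yielding better-calibrated symbol probabilities that downstream modules can consume. The network below is a toy, untrained stand-in for one receiver module; the weights, constellation size, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "DNN module": one hidden layer mapping a 2-D received sample (I/Q)
# to logits over a hypothetical 4-symbol constellation. Weights are
# arbitrary illustrative values, not trained.
W1 = rng.normal(size=(8, 2))
W2 = rng.normal(size=(4, 8))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, mask):
    # ReLU hidden layer with a dropout mask applied at inference time
    h = np.maximum(W1 @ x, 0.0) * mask
    return softmax(W2 @ h)

def bayesian_soft_output(x, num_samples=50, keep_prob=0.8):
    """Monte Carlo dropout: average the module's soft outputs over
    several stochastic forward passes, approximating the Bayesian
    predictive distribution for this module."""
    probs = np.zeros(4)
    for _ in range(num_samples):
        # Inverted-dropout mask so the expected activation is unchanged
        mask = rng.binomial(1, keep_prob, size=8) / keep_prob
        probs += forward(x, mask)
    return probs / num_samples

x = np.array([0.7, -0.3])      # toy received sample
p = bayesian_soft_output(x)    # averaged soft symbol probabilities
print(p)
```

In the modular receiver chain the paper describes, such averaged soft outputs would be passed from one module (e.g. the equalizer) to the next (e.g. the decoder), so that a module's uncertainty is reflected in what downstream stages receive rather than being masked by an overconfident point estimate.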
