Diffusion-based Generative Prior for Low-Complexity MIMO Channel Estimation (2403.03545v1)

Published 6 Mar 2024 in eess.SP and cs.LG

Abstract: This work proposes a novel channel estimator based on diffusion models (DMs), a class of state-of-the-art generative models. In contrast to related works that utilize generative priors, a lightweight convolutional neural network (CNN) with positional embedding of the signal-to-noise ratio (SNR) information is designed by learning the channel distribution in the sparse angular domain. Combined with an estimation strategy that avoids stochastic resampling and truncates those reverse diffusion steps that correspond to a lower SNR than the given pilot observation, the resulting DM estimator has both low complexity and low memory overhead. Numerical results show better performance than state-of-the-art channel estimators utilizing generative priors.
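The abstract's estimation strategy can be sketched as follows. This is a minimal, hypothetical illustration under standard DDPM assumptions, not the paper's implementation: the number of steps `T`, the linear `betas` schedule, and the `denoiser` stand-in for the paper's lightweight CNN are all assumptions. The key ideas shown are (a) matching the pilot observation's SNR to a diffusion timestep so that lower-SNR reverse steps are skipped, and (b) running the remaining reverse steps deterministically, without stochastic resampling.

```python
import numpy as np

T = 100                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)       # standard DDPM noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # \bar{alpha}_t

def snr_of_step(t):
    """Effective SNR of diffusion step t: alpha_bar_t / (1 - alpha_bar_t)."""
    return alpha_bars[t] / (1.0 - alpha_bars[t])

def matching_step(obs_snr):
    """Smallest t whose diffusion SNR has dropped to the observation's SNR."""
    snrs = alpha_bars / (1.0 - alpha_bars)   # strictly decreasing in t
    return int(np.argmax(snrs <= obs_snr))

def denoiser(x, t):
    """Placeholder for the trained CNN predicting the noise eps_theta(x, t)."""
    return np.zeros_like(x)

def dm_estimate(y, obs_snr):
    """Truncated, deterministic reverse diffusion started at the observation y."""
    t_start = matching_step(obs_snr)
    x = np.sqrt(alpha_bars[t_start]) * y     # place observation at step t_start
    for t in range(t_start, -1, -1):
        eps = denoiser(x, t)
        # posterior-mean update only; no noise is injected (no stochastic resampling)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    return x
```

Because no noise is resampled, a single forward pass per remaining step suffices, which is where the low complexity claimed in the abstract comes from: a high-SNR pilot observation enters the chain near the end and triggers only a few denoising steps.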

