
Nonlinear Sparse Bayesian Learning Methods with Application to Massive MIMO Channel Estimation with Hardware Impairments (2506.03775v1)

Published 4 Jun 2025 in eess.SP

Abstract: Accurate channel estimation is critical for realizing the performance gains of massive multiple-input multiple-output (MIMO) systems. Traditional approaches to channel estimation typically assume ideal receiver hardware and linear signal models. However, practical receivers suffer from impairments such as nonlinearities in the low-noise amplifiers and quantization errors, which invalidate standard model assumptions and degrade the estimation accuracy. In this work, we propose a nonlinear channel estimation framework that models the distortion function arising from hardware impairments using Gaussian process (GP) regression while leveraging the inherent sparsity of massive MIMO channels. First, we form a GP-based surrogate of the distortion function, employing pseudo-inputs to reduce the computational complexity. Then, we integrate the GP-based surrogate of the distortion function into newly developed enhanced sparse Bayesian learning (SBL) methods, enabling distortion-aware sparse channel estimation. Specifically, we propose two nonlinear SBL methods based on distinct optimization objectives, each offering a different trade-off between estimation accuracy and computational complexity. Numerical results demonstrate significant gains over the Bussgang linear minimum mean squared error estimator and linear SBL, particularly under strong distortion and at high signal-to-noise ratio.
