
MU-MIMO Symbol-Level Precoding for QAM Constellations with Maximum Likelihood Receivers (2410.22028v1)

Published 29 Oct 2024 in eess.SP

Abstract: In this paper, we investigate symbol-level precoding (SLP) and efficient decoding techniques for downlink transmission, focusing on scenarios where the base station (BS) transmits multiple QAM constellation streams to users equipped with multiple receive antennas. We begin by formulating a joint symbol-level transmit precoding and receive combining optimization problem. We address this coupled problem with the alternating optimization (AO) method and derive closed-form solutions by analyzing the two resulting subproblems. Furthermore, to address the dependence of the receive combining matrix on the transmit signals, we switch to the maximum likelihood detection (MLD) method for decoding. Notably, we demonstrate that the smallest singular value of the precoding matrix significantly impacts the performance of the MLD method: a smaller smallest singular value results in degraded detection performance. Additionally, we show that the traditional SLP matrix is rank-one, making it infeasible to apply MLD directly at the receiver. To circumvent this limitation, we propose a novel symbol-level smallest singular value maximization problem, termed SSVMP, to enable SLP in systems where users employ MLD. Moreover, to reduce the number of optimization variables, we derive a more generic semidefinite programming (SDP)-based formulation. Numerical results validate the effectiveness of the proposed schemes and show that they significantly outperform the traditional block diagonalization (BD)-based method.
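The rank-one limitation described in the abstract can be illustrated numerically: a rank-one precoding matrix has a smallest singular value of exactly zero, which (per the paper's argument) degrades MLD performance because distinct symbol vectors can be mapped onto nearly indistinguishable transmit signals. The sketch below uses a hypothetical rank-one outer-product construction and randomly chosen dimensions purely for illustration; it is not the paper's exact SLP formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 BS transmit antennas, 2 QAM streams.
Nt, K = 4, 2

# Illustrative rank-one precoder: an outer product of two vectors,
# mimicking the rank-one structure of a traditional SLP matrix.
p = rng.standard_normal((Nt, 1)) + 1j * rng.standard_normal((Nt, 1))
q = rng.standard_normal((K, 1)) + 1j * rng.standard_normal((K, 1))
W_rank1 = p @ q.conj().T  # Nt x K, rank one by construction

# A generic full-rank precoder for comparison.
W_full = rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))

# Singular values in descending order; the last entry is sigma_min.
s_rank1 = np.linalg.svd(W_rank1, compute_uv=False)
s_full = np.linalg.svd(W_full, compute_uv=False)

print("rank-one  sigma_min:", s_rank1[-1])  # numerically zero
print("full-rank sigma_min:", s_full[-1])   # strictly positive

# Intuition for the MLD connection: the distance between precoded
# symbol vectors W s1 and W s2 satisfies
#   ||W (s1 - s2)|| >= sigma_min(W) * ||s1 - s2||,
# so sigma_min(W) = 0 allows distinct symbol vectors to collide,
# which is why SSVMP maximizes the smallest singular value.
```

This is why directly applying MLD to a traditional rank-one SLP matrix fails, and it motivates the SSVMP objective of keeping the smallest singular value bounded away from zero.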

