Symbol-Level Precoding Made Practical for Multi-Level Modulations via Block-Level Rescaling (2006.15245v2)

Published 27 Jun 2020 in cs.IT, eess.SP, and math.IT

Abstract: In this paper, we propose an interference-exploitation symbol-level precoding (SLP) method for multi-level modulations based on an in-block power allocation scheme that greatly reduces the signaling overhead. Existing SLP approaches require the symbol-level broadcast of the rescaling factor to the users for correct demodulation, which hinders the practical implementation of SLP. The proposed approach instead allows a block-level broadcast of the rescaling factor, as done in traditional block-level precoding, without sacrificing performance. Our derivations further show that the proposed in-block power allocation admits an exact closed-form solution and thus does not increase the complexity at the base station (BS). In addition to the significant reduction in signaling overhead, validated by the effective throughput results, numerical results demonstrate that the proposed power allocation approach also improves the error-rate performance of existing SLP. Accordingly, the proposed approach enables the practical use of SLP with multi-level modulations.
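To make the signaling-overhead contrast concrete, below is a minimal illustrative sketch in Python/NumPy. It is not the paper's interference-exploitation SLP or its closed-form in-block power allocation; it uses a plain zero-forcing precoder as a stand-in and simply contrasts how many rescaling factors would need to be broadcast per block under symbol-level versus block-level rescaling. All dimensions, the channel model, and the 16-QAM block are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, block_len = 4, 4, 64          # users, BS antennas, symbols per block (assumed values)

# Hypothetical flat-fading channel and 16-QAM symbol block (not taken from the paper)
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
qam = (rng.integers(0, 4, (K, block_len)) * 2 - 3) \
      + 1j * (rng.integers(0, 4, (K, block_len)) * 2 - 3)
qam /= np.sqrt(10)                   # normalize 16-QAM to unit average power

W = np.linalg.pinv(H)                # zero-forcing precoder, a stand-in for the SLP optimization
X = W @ qam                          # unnormalized precoded block, shape (N, block_len)

P_tx = 1.0                           # per-symbol transmit power budget (assumed)

# (a) Symbol-level rescaling: one factor per symbol slot,
#     which the users must receive at every slot to demodulate multi-level constellations
beta_symbol = np.sqrt(P_tx / np.sum(np.abs(X) ** 2, axis=0))
X_symbol = X * beta_symbol

# (b) Block-level rescaling: a single factor for the whole block,
#     i.e. one broadcast per block, which is the signaling model the abstract describes
beta_block = np.sqrt(P_tx * block_len / np.sum(np.abs(X) ** 2))
X_block = X * beta_block

print("rescaling factors to signal per block (symbol-level):", beta_symbol.size)
print("rescaling factors to signal per block (block-level): ", 1)
```

In this sketch the symbol-level scheme must convey `block_len` factors per block while the block-level scheme conveys one, which is the overhead gap the proposed in-block power allocation is designed to close.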

Citations (10)
