Variable-Length Sparse Feedback Codes for Point-to-Point, Multiple Access, and Random Access Channels (2103.09373v3)

Published 17 Mar 2021 in cs.IT and math.IT

Abstract: This paper investigates variable-length stop-feedback codes for memoryless channels in point-to-point, multiple access, and random access communication scenarios. The proposed codes employ $L$ decoding times $n_1, n_2, \dots, n_L$ for the point-to-point and multiple access channels and $KL + 1$ decoding times for the random access channel with at most $K$ active transmitters. In the point-to-point and multiple access channels, the decoder uses the observed channel outputs to decide whether to decode at each of the allowed decoding times $n_1, \dots, n_L$, at each such time using a single bit of feedback to tell the encoder whether or not to stop transmitting. In the random access scenario, the decoder estimates the number of active transmitters at time $n_0$ and then chooses among decoding times $n_{k, 1}, \dots, n_{k, L}$ if it believes that there are $k$ active transmitters. In all cases, the choice of allowed decoding times is part of the code design; for a given fixed value of $L$, the allowed decoding times are chosen to minimize the expected decoding time for a given codebook size and target average error probability. The number $L$ in each scenario is assumed to be constant even when the blocklength is allowed to grow; the resulting code therefore requires only sparse feedback. The central results are asymptotic approximations of achievable rates as a function of the error probability, the expected decoding time, and the number of decoding times. A converse for variable-length stop-feedback codes with uniformly-spaced decoding times is included for the point-to-point channel.
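The stop-feedback mechanism described above can be illustrated with a toy, hypothetical sketch: a single bit is repeated over a binary symmetric channel BSC($p$), and the decoder may only attempt decoding at sparse times $n_1 < \dots < n_L$, stopping (and signaling the encoder with one feedback bit) once its majority-vote margin exceeds a threshold. This is not the paper's construction or its achievability scheme; the repetition code, the margin rule, and all parameter values are assumptions made purely for illustration.

```python
import random

def stop_feedback_trial(bit, decode_times=(8, 16, 32), p=0.1, margin=4):
    """Simulate one transmission of `bit` over a BSC(p) with sparse
    stop-feedback decoding at the times in `decode_times` (hypothetical
    toy scheme, not the paper's code construction)."""
    votes = 0       # running majority-vote tally of channel outputs
    n_prev = 0
    for n in decode_times:
        # Observe the channel outputs received since the last decoding time.
        for _ in range(n - n_prev):
            y = bit ^ (random.random() < p)   # BSC flip with probability p
            votes += 1 if y else -1
        n_prev = n
        # Decode (and send the one-bit "stop" feedback) if confident,
        # or if this is the last allowed decoding time n_L.
        if abs(votes) >= margin or n == decode_times[-1]:
            return (1 if votes >= 0 else 0), n   # decoded bit, stopping time

random.seed(0)
trials = [stop_feedback_trial(1) for _ in range(2000)]
err = sum(b != 1 for b, _ in trials) / len(trials)
avg_n = sum(n for _, n in trials) / len(trials)
```

Averaging the stopping time over many trials gives an empirical estimate of the expected decoding time, the quantity the paper's code design minimizes for a given codebook size and target error probability; making the decoding times sparser (smaller $L$) trades feedback frequency against expected latency.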

Citations (6)
