Cognitive Radio Transmission Strategies for Primary Markovian Channels (1211.5720v1)

Published 25 Nov 2012 in cs.NI

Abstract: A fundamental problem in cognitive radio systems is that the cognitive radio is ignorant of the primary channel state and, hence, of the amount of actual harm it inflicts on the primary license holder. Sensing the primary transmitter does not help in this regard. To tackle this issue, we assume in this paper that the cognitive user can eavesdrop on the Automatic Repeat reQuest (ARQ) ACK/NACK feedback sent from the primary receiver to the primary transmitter. Assuming a primary channel state that follows a Markov chain, this feedback gives the cognitive radio an indication of the primary link quality. Based on the ACK/NACK received, we devise optimal transmission strategies for the cognitive radio so as to maximize a weighted sum of primary and secondary throughput. The actual weight used during network operation is determined by the degree of protection afforded to the primary link. We begin by formulating the problem for a channel with a general number of states. We then study a two-state model, for which we characterize a scheme that spans the boundary of the primary-secondary rate region. Moreover, we study a three-state model, for which we derive the optimal strategy using dynamic programming. We also extend our two-state model to a two-channel case, where the secondary user can decide to transmit on a particular channel or not to transmit at all. We provide numerical results for our optimal strategies and compare them with simple greedy algorithms for a range of primary channel parameters. Finally, we investigate the case where some of the parameters are unknown and are learned using a hidden Markov model (HMM).
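
To make the abstract's setup concrete, the sketch below shows one way the ingredients fit together: a Markovian primary channel, ARQ ACK/NACK feedback as a partial observation of its state, and a transmit/idle decision that trades off primary and secondary throughput through a weight. This is not the paper's algorithm; the two-state chain, transition probabilities, collision model, reward values, and discounting are all assumed placeholders used only to illustrate a belief-based dynamic program over the ACK/NACK observations.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): a two-state
# Markov primary channel ("good"/"bad"). Each slot the secondary user
# observes the primary ACK/NACK, maintains a belief p = Pr(channel good),
# and chooses transmit / stay idle to maximize a discounted weighted sum
# of primary and secondary throughput. All numeric values are assumed.

P = np.array([[0.9, 0.1],   # rows: current state (good, bad)
              [0.3, 0.7]])  # cols: next state   (good, bad)

R1_GOOD, R1_BAD = 1.0, 0.0  # primary throughput per slot, by state
R2 = 1.0                    # secondary throughput when it transmits
COLLISION_LOSS = 0.8        # assumed fraction of primary rate lost under interference
W = 0.7                     # weight on primary throughput in the objective
GAMMA = 0.95                # discount factor
GRID = np.linspace(0.0, 1.0, 201)  # discretized belief Pr(good)


def predict(p):
    """Propagate the belief one step through the Markov chain."""
    return p * P[0, 0] + (1.0 - p) * P[1, 0]


def value_iteration(n_iter=300):
    V = np.zeros_like(GRID)
    for _ in range(n_iter):
        interp_V = lambda q: np.interp(q, GRID, V)
        V_new = np.empty_like(V)
        for i, p in enumerate(GRID):
            # Stay idle: the primary is unharmed; an ACK (channel good) or
            # NACK (channel bad) resolves the state, so the posterior
            # collapses before the next prediction step.
            r_idle = W * (p * R1_GOOD + (1.0 - p) * R1_BAD)
            v_idle = r_idle + GAMMA * (p * interp_V(predict(1.0))
                                       + (1.0 - p) * interp_V(predict(0.0)))
            # Transmit: the secondary gains R2 but degrades the primary link
            # (simple assumed collision model); the feedback is treated as
            # uninformative here, so the prior belief is propagated forward.
            r_tx = W * p * R1_GOOD * (1.0 - COLLISION_LOSS) + (1.0 - W) * R2
            v_tx = r_tx + GAMMA * interp_V(predict(p))
            V_new[i] = max(v_idle, v_tx)
        V = V_new
    return V


if __name__ == "__main__":
    V = value_iteration()
    print("Value at belief Pr(good)=0.5:", np.interp(0.5, GRID, V))
```

Sweeping the weight W between 0 and 1 in a sketch like this mirrors the role of the weight in the paper's objective, where the chosen weight reflects the degree of protection afforded to the primary link and, in the two-state analysis, traces the boundary of the primary-secondary rate region.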
