Exploiting the Past to Reduce Delay in CSMA Scheduling: A High-order Markov Chain Approach (1302.3250v4)

Published 13 Feb 2013 in cs.NI and cs.PF

Abstract: Recently, several CSMA algorithms based on the Glauber dynamics model have been proposed for multihop wireless scheduling, as viable solutions that achieve throughput optimality yet are simple to implement. However, their delay performance remains unsatisfactory, mainly due to the nature of the underlying Markov chains, which imposes a fundamental constraint on how the link state can evolve over time. In this paper, we propose a new approach toward better queueing and delay performance, based on our observation that the algorithm need not be Markovian, as long as it can be implemented in a distributed manner and achieves the same throughput optimality, while offering far better delay performance for general network topologies. Our approach hinges upon utilizing past state information observed by each local link and then constructing a high-order Markov chain for the evolution of the feasible link schedules. We show in theory and simulation that our proposed algorithm, named delayed CSMA, adds virtually no additional overhead onto the existing CSMA-based algorithms, achieves throughput optimality under the usual choice of link weight as a function of local queue length, and also provides much better delay performance by effectively `de-correlating' the link state process (thus removing link starvation) under any arbitrary network topology. From our extensive simulations we observe that the delay under our algorithm can often be reduced by a factor of 20 over a wide range of scenarios, compared to the standard Glauber-dynamics-based CSMA algorithm.
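The core idea described in the abstract — applying the Glauber update to the schedule from T slots ago rather than the previous slot, which interleaves T independent chains and thereby de-correlates consecutive link states — can be illustrated with a small simulation. This is a hedged sketch, not the paper's exact algorithm: the conflict-graph model, the exponential link fugacity `lambda_l = exp(w_l)`, and the one-link-update-per-slot discipline are simplifying assumptions made for illustration.

```python
import math
import random

def delayed_csma(conflict, weights, T=5, slots=2000, seed=0):
    """Sketch of delayed CSMA on a conflict graph.

    conflict: dict mapping each link to the set of links it conflicts with
    weights:  dict mapping each link to its weight w_l; the activation
              fugacity is lambda_l = exp(w_l)
    T:        delay; the Glauber update at slot t is applied to the
              schedule from slot t - T (T = 1 recovers standard
              Glauber-dynamics CSMA)

    Returns the list of schedules (sets of active links), one per slot.
    """
    rng = random.Random(seed)
    links = sorted(conflict)
    history = [set() for _ in range(T)]   # T empty "past" schedules
    for _ in range(slots):
        past = history[-T]                # schedule from T slots ago
        new = set(past)                   # non-updating links keep the
                                          # delayed state
        u = rng.choice(links)             # one link updates this slot
        new.discard(u)
        if conflict[u].isdisjoint(new):   # feasible to (re)activate u?
            lam = math.exp(weights[u])
            if rng.random() < lam / (1.0 + lam):
                new.add(u)
        history.append(new)
    return history[T:]
```

Because each of the T interleaved chains individually preserves feasibility under the Glauber update, every emitted schedule is an independent set of the conflict graph; meanwhile, schedules at adjacent slots come from different chains, which is the de-correlation effect the paper exploits to reduce delay.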

Citations (10)
