Strongly Tail-Optimal Scheduling in the Light-Tailed M/G/1 (2404.08826v2)

Published 12 Apr 2024 in cs.PF and math.PR

Abstract: We study the problem of scheduling jobs in a queueing system, specifically an M/G/1 with light-tailed job sizes, to asymptotically optimize the response time tail. This means scheduling to make $\mathbf{P}[T > t]$, the chance a job's response time exceeds $t$, decay as quickly as possible in the $t \to \infty$ limit. For some time, the best known policy was First-Come First-Served (FCFS), which has an asymptotically exponential tail: $\mathbf{P}[T > t] \sim C e^{-\gamma t}$. FCFS achieves the optimal decay rate $\gamma$, but its tail constant $C$ is suboptimal. Only recently have policies that improve upon FCFS's tail constant been discovered. But it is unknown what the optimal tail constant is, let alone what policy might achieve it. In this paper, we derive a closed-form expression for the optimal tail constant $C$, and we introduce $\gamma$-Boost, a new policy that achieves this optimal tail constant. Roughly speaking, $\gamma$-Boost operates similarly to FCFS, but it pretends that small jobs arrive earlier than their true arrival times. This significantly reduces the response time of small jobs without unduly delaying large jobs, improving upon FCFS's tail constant by up to 50% with only moderate job size variability, with even larger improvements for higher variability. While these results are for systems with full job size information, we also introduce and analyze a version of $\gamma$-Boost that works in settings with partial job size information, showing it too achieves significant gains over FCFS. Finally, we show via simulation that $\gamma$-Boost has excellent practical performance.
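The abstract describes $\gamma$-Boost as FCFS with "boosted" arrival times for small jobs. The sketch below is an illustrative, simplified simulation of that idea, not the paper's reference implementation: a nonpreemptive single-server queue that ranks waiting jobs by arrival time minus a boost that grows as job size shrinks. The boost function, job size distribution, and parameter values here are placeholder assumptions; the paper derives the specific boost (as a function of $\gamma$) that attains the optimal tail constant.

```python
# Illustrative sketch of a boosted-arrival scheduler in an M/G/1 queue.
# Jobs are served nonpreemptively in order of (arrival_time - boost(size)),
# so small jobs are treated as if they arrived earlier, as described in the
# abstract. boost() below is a hypothetical placeholder, not the paper's formula.

import heapq
import math
import random


def sample_job_size():
    # Placeholder light-tailed job size distribution (exponential, mean 1).
    return random.expovariate(1.0)


def boost(size, gamma):
    # Hypothetical decreasing boost: smaller jobs get a larger head start.
    return (1.0 / gamma) * math.log(1.0 + 1.0 / size)


def simulate(num_jobs=100_000, lam=0.7, gamma=0.3, use_boost=True):
    """Nonpreemptive single-server queue; returns per-job response times."""
    # Poisson arrivals with rate lam and i.i.d. job sizes (load = 0.7 here).
    t, jobs = 0.0, []
    for _ in range(num_jobs):
        t += random.expovariate(lam)
        jobs.append((t, sample_job_size()))

    queue, responses = [], []
    now, i = 0.0, 0
    while i < len(jobs) or queue:
        # If the server is idle with nothing waiting, jump to the next arrival.
        if not queue and i < len(jobs) and jobs[i][0] > now:
            now = jobs[i][0]
        # Admit every job that has arrived by the current time.
        while i < len(jobs) and jobs[i][0] <= now:
            arrival, size = jobs[i]
            key = arrival - boost(size, gamma) if use_boost else arrival
            heapq.heappush(queue, (key, arrival, size))
            i += 1
        # Serve the job with the smallest (boosted) arrival time to completion.
        _, arrival, size = heapq.heappop(queue)
        now += size
        responses.append(now - arrival)
    return responses


if __name__ == "__main__":
    resp = sorted(simulate())
    # Compare tail behavior against FCFS (use_boost=False), e.g. via high quantiles.
    print("P99.9 response time:", resp[int(0.999 * len(resp))])
```

Running the same simulation with `use_boost=False` recovers plain FCFS, so the two runs can be compared at high response-time quantiles to see the tail improvement the abstract describes.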
