Jointly approximating optimal mean and optimal tail constant

For the M/G/1 queue with light‑tailed job sizes and any ε1, ε2 > 0, construct a scheduling policy whose mean response time is within ε1 of that of Shortest Remaining Processing Time (SRPT) and whose response‑time tail constant is simultaneously within ε2 of that of Boost_γ; equivalently, characterize, and show how to attain, the Pareto frontier between mean response time and tail constant.
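
One way to read the two targets precisely (a standard convention for light‑tailed M/G/1 tail asymptotics, assumed here rather than quoted from the paper): writing T_π for the stationary response time under policy π and γ for the optimal exponential decay rate of the response‑time tail, the tail constant of π is

\[
  C_\pi \;=\; \lim_{t \to \infty} \frac{\Pr[T_\pi > t]}{e^{-\gamma t}},
\]

and the problem asks, for each ε1, ε2 > 0, for a policy π with E[T_π] within ε1 of E[T_SRPT] and C_π within ε2 of C_{Boost_γ}.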

Background

SRPT minimizes mean response time but performs poorly on the response‑time tail for light‑tailed job sizes, while Boost_γ minimizes the asymptotic tail constant but is not designed to optimize the mean.

The authors explicitly conjecture that a single policy can approximate both extremes arbitrarily closely, which motivates characterizing the Pareto frontier between mean response time and tail constant.
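
To make the two coordinates of this Pareto frontier concrete, the sketch below estimates both metrics by simulation: the sample-mean response time and an empirical tail probability P(T > t) at a fixed cutoff, which serves as a crude finite-t proxy for the tail constant. This is an illustration only, under assumed parameters: job sizes are exponential as a concrete light‑tailed case, and FCFS stands in as a simple tail-friendly baseline because Boost_γ's definition is not reproduced in this excerpt.

```python
# Minimal M/G/1 simulation sketch (illustrative assumptions throughout).
# Estimates the two metrics of the open problem for two simple policies:
# preemptive SRPT and FCFS. Boost_gamma itself is NOT implemented here.
import random


def simulate(policy, lam=0.7, mean_size=1.0, num_jobs=200_000, seed=1):
    """Simulate an M/G/1 queue and return the list of response times.

    policy: "SRPT" preemptively serves the job with the smallest remaining size;
            "FCFS" serves jobs in arrival order.
    """
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    arrivals = 0
    in_system = []          # each job is a mutable pair [arrival_time, remaining_size]
    response_times = []

    while len(response_times) < num_jobs:
        if not in_system:
            # Server idle: jump forward to the next arrival.
            t = next_arrival
            in_system.append([t, rng.expovariate(1.0 / mean_size)])
            arrivals += 1
            next_arrival = t + rng.expovariate(lam)
            continue

        # Pick the job to serve right now, according to the policy.
        if policy == "SRPT":
            job = min(in_system, key=lambda j: j[1])   # smallest remaining size
        else:
            job = min(in_system, key=lambda j: j[0])   # earliest arrival (FCFS)

        completion = t + job[1]
        if arrivals < num_jobs and next_arrival < completion:
            # Work on `job` until the next arrival, which is a preemption point.
            job[1] -= next_arrival - t
            t = next_arrival
            in_system.append([t, rng.expovariate(1.0 / mean_size)])
            arrivals += 1
            next_arrival = t + rng.expovariate(lam)
        else:
            # `job` finishes before anything else happens.
            t = completion
            response_times.append(t - job[0])
            in_system.remove(job)

    return response_times


if __name__ == "__main__":
    cutoff = 20.0  # arbitrary threshold for the empirical tail probability
    for policy in ("SRPT", "FCFS"):
        T = simulate(policy)
        mean_T = sum(T) / len(T)
        tail = sum(1 for x in T if x > cutoff) / len(T)
        print(f"{policy}: mean response time ≈ {mean_T:.3f}, P(T > {cutoff}) ≈ {tail:.2e}")
```

On runs like this, SRPT should come out ahead on the mean while the tail-friendlier baseline comes out ahead on the tail probability; a policy resolving the open problem would have to land near the better value in both columns simultaneously.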

References

Because the tail constant is a purely asymptotic notion, we conjecture that, at least theoretically, it is possible to design a policy with mean response time arbitrarily close to SRPT's and tail constant arbitrarily close to \boost{\gamma}'s.

Strongly Tail-Optimal Scheduling in the Light-Tailed M/G/1 (2404.08826 - Yu et al., 12 Apr 2024) in Conclusion, Section “Metrics beyond the tail constant”