
Exact and Approximate High-Multiplicity Scheduling on Identical Machines (2404.17274v2)

Published 26 Apr 2024 in cs.DS and cs.CC

Abstract: Goemans and Rothvoss (SODA'14) gave a framework for solving problems which can be described as finding a point in $\mathrm{int.cone}(P\cap\mathbb{Z}^N)\cap Q$, where $P,Q\subset\mathbb{R}^N$ are (bounded) polyhedra. The running time for solving such a problem is $enc(P)^{2^{O(N)}}\,enc(Q)^{O(1)}$. This framework can be used to solve various scheduling problems, but the encoding length $enc(P)$ usually involves large parameters like the makespan. We describe three tools to improve the framework:

- Problem-specific preprocessing can be used to greatly reduce $enc(P)$.
- By solving a certain LP relaxation and then using the classical result by Frank and Tardos (Combinatorica '87), we get a more compact encoding of $P$ in general.
- A result by Jansen and Klein (SODA'17) makes the running time depend on the number of vertices of the integer hull of $P$. We provide a new bound for this number that is similar to the one by Berndt et al. (SOSA'21) but better for our setting.

For example, applied to the scheduling problem $P||C_{\max}$, these tools improve the running time from $(\log(C_{\max}))^{2^{O(d)}}\,enc(I)^{O(1)}$ to the possibly much better $(\log(p_{\max}))^{2^{O(d)}}\,enc(I)^{O(1)}$. Here, $p_{\max}$ is the largest processing time, $d$ is the number of different processing times, $C_{\max}$ is the makespan, and $enc(I)$ is the encoding length of the instance. On the complexity side, we use reductions from the literature to provide new parameterized lower bounds for $P||C_{\max}$. Finally, we show that the big open question asked by Mnich and van Bevern (Comput. Oper. Res. '18), whether $P||C_{\max}$ is FPT w.r.t. the number of job types $d$, has the same answer as the question whether $Q||C_{\max}$ is FPT w.r.t. the number of job and machine types $d+\tau$ (all in high-multiplicity encoding). The same holds for the objective $C_{\min}$.
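To make the connection to scheduling concrete, here is a sketch of the standard configuration view under which $P||C_{\max}$ fits the form $\mathrm{int.cone}(P\cap\mathbb{Z}^N)\cap Q$; this is an illustration, not taken verbatim from the paper, and details (e.g., how the bound on the number of machines is encoded) may differ. Suppose there are $d$ job types with processing times $p_1,\dots,p_d$ and multiplicities $n_1,\dots,n_d$, $m$ identical machines, and a candidate makespan $T$. Take

$P = \{ (x,1) : x\in\mathbb{R}^d_{\ge 0},\ \sum_{j=1}^d p_j x_j \le T \}$  (the lifted polytope of feasible machine configurations),
$Q = \{ (n_1,\dots,n_d,\mu) : 0 \le \mu \le m \}$  (fixing the job multiplicities and bounding the number of configurations used).

A point of $\mathrm{int.cone}(P\cap\mathbb{Z}^{d+1})\cap Q$ is then a nonnegative integer combination of at most $m$ integer configurations whose job counts sum to exactly $(n_1,\dots,n_d)$, i.e., a schedule with makespan at most $T$. Searching over $T$ yields the optimum, and since $enc(P)$ depends on $T$, this is where a $\log(C_{\max})$ factor enters the running time that the paper's tools aim to replace by $\log(p_{\max})$.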
