Single-Machine Scheduling Problem
- Single-machine scheduling is the task of sequencing jobs—with constraints such as processing times, release dates, and due dates—to optimize objectives like makespan and total completion time.
- The topic encompasses exact algorithms (e.g., greedy rules and dynamic programming), approximation schemes including PTAS, and ML-assisted heuristics developed to cope with the wide range of computational complexities its variants exhibit.
- It extends to practical applications and advanced scenarios like sequence-dependent setups, robust scheduling under uncertainty, and dynamic job attributes in manufacturing and service systems.
A single-machine scheduling problem comprises the sequencing of a finite set of jobs, each with associated processing requirements and timing constraints, on a single processing unit (machine) so as to optimize a specified objective function. Typically, each job $j$ is characterized by at least one of the following parameters: processing time $p_j$, release date $r_j$, due date $d_j$, delivery time (tail) $q_j$, weight $w_j$, and possibly additional attributes such as setup times, deterioration, resource consumption, or precedence constraints. The problem is foundational in the theory and practice of combinatorial optimization and underpins a diverse range of application-driven scenarios in manufacturing, computing, and service operations. Research in this area addresses both exact and approximate methods, variants with additional side constraints, the role of uncertainty and robustness, and the algorithmic and structural complexity landscape.
1. Formal Definitions and Key Variants
A canonical single-machine scheduling problem is represented in three-field notation as $1 \mid \beta \mid \gamma$, where the first field indicates a single machine, the second field denotes additional structural or resource constraints, and the third field specifies the objective function. Examples of objectives include minimizing the makespan $C_{\max}$, the total completion time $\sum C_j$, the maximum lateness $L_{\max}$, the total weighted tardiness $\sum w_j T_j$, and the (weighted) number of tardy jobs $\sum w_j U_j$ (respectively $\sum U_j$).
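To make these objectives concrete, the following minimal Python sketch (illustrative job data and field names, not drawn from any cited work) evaluates a fixed job sequence and reports several of the criteria above.

```python
from dataclasses import dataclass

@dataclass
class Job:
    p: int  # processing time p_j
    r: int  # release date r_j
    d: int  # due date d_j
    w: int  # weight w_j

def evaluate(sequence):
    """Compute completion times and common objectives for a fixed job order."""
    t = 0
    C, L, T, U = [], [], [], []
    for job in sequence:
        t = max(t, job.r) + job.p          # wait for the release date, then process
        C.append(t)                        # completion time C_j
        L.append(t - job.d)                # lateness L_j = C_j - d_j
        T.append(max(0, t - job.d))        # tardiness T_j = max(0, L_j)
        U.append(1 if t > job.d else 0)    # tardy indicator U_j
    return {
        "C_max": C[-1],
        "sum_C": sum(C),
        "L_max": max(L),
        "sum_wT": sum(j.w * Tj for j, Tj in zip(sequence, T)),
        "sum_wU": sum(j.w * Uj for j, Uj in zip(sequence, U)),
    }

jobs = [Job(p=3, r=0, d=5, w=2), Job(p=2, r=1, d=4, w=1), Job(p=4, r=0, d=9, w=3)]
print(evaluate(jobs))  # {'C_max': 9, 'sum_C': 17, 'L_max': 1, 'sum_wT': 1, 'sum_wU': 1}
```

Evaluating a fixed sequence is straightforward; the combinatorial difficulty lies entirely in choosing the sequence.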
Classical special cases and extensions include:
- Release dates $r_j$: each job $j$ cannot start before time $r_j$.
- Sequence-dependent setup times: Additional cost $s_{ij}$ incurred depending on the job sequence, e.g., to switch from job $i$ to job $j$ (Leib et al., 28 Jul 2025).
- Non-renewable resource constraints: Each job consumes resources that are replenished only at fixed points in time (Bérczi et al., 2019, Hashimoto et al., 2021).
- Job-dependent machine deterioration or maintenance: Machine capacity degrades with use, periodically requiring maintenance (Luo et al., 2016).
- Robust or recoverable robust scheduling: Schedules are constructed to hedge against uncertainty in processing times, release times, or other data (Umang et al., 2014, Bold et al., 2020, Bold et al., 2021); a worst-case evaluation sketch follows this list.
- Multiobjective optimization: Simultaneously optimizing multiple conflicting criteria, such as cost and tardiness (zhou, 2013).
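To make the robust viewpoint above concrete, the sketch below (illustrative job data and names, not drawn from the cited papers) evaluates the worst-case maximum lateness of a fixed sequence under interval uncertainty in the processing times; since $L_{\max}$ is non-decreasing in every processing time, the worst case over the intervals is attained at their upper bounds.

```python
def worst_case_Lmax(sequence, p_upper):
    """Worst-case maximum lateness of a fixed sequence under interval
    uncertainty on processing times.

    For a regular objective such as L_max, every completion time is
    non-decreasing in every processing time, so the worst case over
    processing-time intervals is attained at the upper bounds.
    """
    t, worst = 0, float("-inf")
    for job in sequence:
        t = max(t, job["r"]) + p_upper[job["id"]]  # start no earlier than r_j
        worst = max(worst, t - job["d"])           # lateness C_j - d_j
    return worst

# Illustrative data: two jobs with release dates, due dates, and processing-time upper bounds.
jobs = [{"id": "A", "r": 0, "d": 6}, {"id": "B", "r": 1, "d": 7}]
p_upper = {"A": 4, "B": 3}
print(worst_case_Lmax(jobs, p_upper))  # 0: A completes at 4, B at 7
```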
2. Structural Properties and Complexity
The complexity status and available solution techniques for single-machine scheduling vary sharply with the introduced constraints and objective function.
- The simplest problems (e.g., $1||\sum C_j$) are polynomially solvable via sequencing jobs by non-decreasing processing times (SPT rule).
- $1||L_{\max}$ is solvable in $O(n \log n)$ time by the Earliest Due Date (EDD) rule when all release dates are equal (Vakhania et al., 29 May 2024); both rules are sketched in code below.
- The presence of release dates or sequence-dependent setup times makes the problem strongly NP-hard (e.g., $1|r_j|\sum C_j$ or $1|s_{ij}|C_{\max}$) (Vakhania et al., 29 May 2024, Leib et al., 28 Jul 2025). The general $1|r_j|L_{\max}$ and $1|r_j, q_j|C_{\max}$ (with tails) are archetypal strongly NP-hard problems.
- Minimizing the weighted number of tardy jobs ($1||\sum w_j U_j$ or $1|d_j = d|\sum w_j U_j$) is NP-hard even for highly restricted settings (e.g., a single common due date), via a reduction from Knapsack (Hermelin et al., 2017, Kaul et al., 23 Aug 2024).
Tables of complexity for various scheduling objectives and common constraints can be found in surveys (Vakhania et al., 29 May 2024).
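As referenced above, the two polynomial cases reduce to a single sort each; a minimal sketch with illustrative job data:

```python
def spt_order(jobs):
    """SPT rule: sort by non-decreasing processing time.
    Optimal for 1 || sum C_j (all release dates equal)."""
    return sorted(jobs, key=lambda job: job["p"])

def edd_order(jobs):
    """EDD rule: sort by non-decreasing due date.
    Optimal for 1 || L_max when all release dates are equal."""
    return sorted(jobs, key=lambda job: job["d"])

jobs = [
    {"id": 1, "p": 4, "d": 9},
    {"id": 2, "p": 2, "d": 4},
    {"id": 3, "p": 3, "d": 5},
]
print([j["id"] for j in spt_order(jobs)])  # [2, 3, 1]
print([j["id"] for j in edd_order(jobs)])  # [2, 3, 1]
```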
3. Algorithmic Techniques and Approximation Schemes
The literature exhibits a wide algorithmic spectrum:
- Exact Algorithms: For polynomially solvable cases (e.g., the EDD and SPT settings), greedy rules and dynamic programming dominate. Specialized dynamic programming schemes exist under parameterizations such as bounded numbers of distinct due dates, release dates, or weights (Hermelin et al., 2017, Kaul et al., 23 Aug 2024); a minimal DP sketch for the common-due-date case follows this list. For fixed-parameter tractability, reductions to mixed integer linear programming (MILP) and the use of the algorithms of Dadush et al. or Lenstra’s method enable efficient solutions when certain parameter combinations (e.g., the numbers of distinct processing times and weights) are small (Hermelin et al., 2017, Kaul et al., 23 Aug 2024).
- Approximation Algorithms:
- Polynomial-Time Approximation Schemes (PTAS): Hybrid evolutionary algorithms yielding a PTAS for maximum lateness ($L_{\max}$) (Mitavskiy et al., 2012); graphical and variable-state-space reduction methods for strongly NP-hard objectives (Vakhania et al., 29 May 2024).
- Constant-Factor Approximations: E.g., a $4$-approximation for min-sum objectives with non-decreasing cost functions using primal-dual methods and knapsack-cover inequalities (Cheung et al., 2016), and a $3/2$-approximation and a $(4+\epsilon)$-approximation for scheduling with non-renewable resources (Bérczi et al., 2019).
- Greedy and Heuristic Approaches: For multiobjective and Pareto optimization, state transition algorithms with swap/shift/symmetry transformations, combined with non-dominated sorting and Pareto-archived sets (zhou, 2013).
- Data-Driven and ML-Assisted Heuristics: Deep learning regressors and classification models are now integrated with decomposition heuristics (e.g., Lawler’s and Della Croce’s symmetric decompositions) to estimate subproblem costs or predict job “early/tardy” status, enabling rapid and highly scalable performance on large instances (Bouška et al., 19 Feb 2024, Antonov et al., 19 Aug 2025, Parmentier et al., 2021). Model architectures include LSTM-based predictors and multilayer perceptrons with features crafted to ensure permutation invariance and leverage combinatorial context.
- Robust and Recoverable Frameworks: Robust optimization with interval or polyhedral uncertainty is handled via MILPs for worst-case scenario evaluation and metaheuristics (variable neighborhood search, iterated local search) (Umang et al., 2014). Recoverable-robust formulations allow a bounded number of swaps between the initial and adjusted schedules, translating the adjustment process into matching, assignment, or layered network optimization (Bold et al., 2020, Bold et al., 2021).
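Many of the dynamic programming schemes mentioned in the exact-algorithms bullet above are knapsack-like. The following minimal sketch (illustrative data, assuming zero release dates) handles the common-due-date case $1|d_j = d|\sum w_j U_j$: any job set with total processing time at most $d$ can be completed on time, so maximizing the on-time weight is a 0/1 knapsack with capacity $d$, and the minimum weighted number of tardy jobs is the complementary weight.

```python
def max_on_time_weight(jobs, d):
    """Pseudo-polynomial DP for 1 | d_j = d | sum w_j U_j (zero release dates).

    A job set can be completed on time iff its total processing time is at
    most the common due date d, so maximizing the on-time weight is a 0/1
    knapsack with capacity d. Runs in O(n * d) time.
    """
    best = [0] * (d + 1)          # best[t] = max on-time weight with time budget t
    for p, w in jobs:             # job = (processing time p_j, weight w_j)
        for t in range(d, p - 1, -1):
            best[t] = max(best[t], best[t - p] + w)
    return best[d]

jobs = [(3, 2), (2, 1), (4, 3)]   # illustrative (p_j, w_j) pairs
d = 6                             # common due date
tardy_weight = sum(w for _, w in jobs) - max_on_time_weight(jobs, d)
print(tardy_weight)               # 2: jobs (2, 1) and (4, 3) fit on time
```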
4. Parameterized Complexity and Fine-Grained Results
Recent work systematically maps the fixed-parameter tractability and hardness landscape using key instance parameters:
| Parameter(s) | Tractability | Reference |
| --- | --- | --- |
| One due date ($d_j \equiv d$) | NP-hard | (Hermelin et al., 2017) |
| Any two of: number of distinct processing times $\#p$, due dates $\#d$, weights $\#w$ | FPT (parameterized algorithms; MILP) | (Hermelin et al., 2017) |
| $\#p + \#w$ | FPT (via MILP; Lenstra) | (Kaul et al., 23 Aug 2024) |
| $\#d$ | W[1]-hard (even if the other parameters are constant) | (Kaul et al., 23 Aug 2024) |
| Constant $\#d$ or $\#w$ | XP (pseudo-polynomial DP) after unary encoding | (Kaul et al., 23 Aug 2024) |
This parameter landscape clarifies that for many highly structured inputs (with limited numerical diversity), even NP-hard objectives can be solved efficiently. However, moderate parameter diversity, especially in due dates and release dates, quickly leads to W[1]-hardness.
5. Multiobjective and Robust Optimization
The demand for multi-objective scheduling has led to the design of state transition algorithms that represent candidate solutions as vectors, apply swap/shift/symmetry operators, and select via non-dominated sorting and Pareto archives. Evaluation functions may incorporate arbitrary objectives, and experimental results demonstrate improved performance and solution diversity over standard enumeration or heuristic methods (zhou, 2013). Robust scheduling, whose objective is to minimize the worst-case cost under uncertainty, leverages both combinatorial properties (e.g., small scenario sets covering the extreme cases) and metaheuristics, and shows that for nonzero release times the strongly NP-hard behavior persists (Umang et al., 2014). In recoverable robust problems, allowing a bounded number of schedule edits after the uncertain parameters are revealed reduces the adjustment step to matching and assignment optimization, yielding compact, polynomially sized formulations for practical computation (Bold et al., 2020).
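As a small illustration of the non-dominated selection step used in such Pareto-based methods, the following sketch (with hypothetical objective values) keeps exactly those candidates that are not dominated under two minimization objectives, e.g., cost and total tardiness.

```python
def non_dominated(points):
    """Return the Pareto front of a list of objective vectors (minimization)."""
    front = []
    for p in points:
        # p is dominated if some other point q is <= p in every objective
        # (and differs from p, hence is strictly better in at least one).
        dominated = any(q != p and all(qi <= pi for qi, pi in zip(q, p)) for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (total cost, total tardiness) values of candidate schedules.
candidates = [(10, 7), (8, 9), (12, 3), (11, 3), (12, 8)]
print(non_dominated(candidates))  # [(10, 7), (8, 9), (11, 3)]
```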
6. Extensions: Sequence-Dependent Effects, Rescheduling, and Dynamic Job Attributes
Emerging lines of research address increasingly realistic or industry-driven side constraints:
- Sequence-dependent setup times, abstracted as graph path problems: For jobs characterized by process parameters (e.g., temperature, color), optimal sequencing is recast as a shortest path in a layered graph whose arcs encode both color-block changes and thermal adjustment costs. The two-color case admits polynomial-time solutions via this network transformation, with blockwise and within-block sorted structures characterizing optimal schedules (Leib et al., 28 Jul 2025); a simplified sketch of the layered-graph idea follows this list.
- Rescheduling with disruption constraints: Integrating new jobs into existing schedules, with disruption measured by the maximum or the sum of absolute time deviations, raises intricate trade-offs. Depending on the particular scheduling cost (e.g., total weighted completion time) and the disruption measure, idle times may or may not be required for optimality. Complexity ranges from polynomially solvable to strongly NP-hard, with a fine-grained classification across objectives and criteria (Rener et al., 2023).
- Jobs with dynamic attributes: Models where the state of a job (e.g., its temperature due to heating and cooling) evolves both during and outside processing, subject to constraints on allowable states, are analyzed via continuous-time or fractional schedule representations. In such settings, total completion time minimization reduces to a tractable linear program, and makespan minimization admits closed-form optimal solutions (Lambers et al., 2023).
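To illustrate the layered-graph idea from the first bullet above in its simplest form, the sketch below (hypothetical class sizes and setup costs, not the exact construction of Leib et al., 28 Jul 2025) minimizes total sequence-dependent setup cost when jobs are interchangeable within attribute classes: a state records how many jobs of each class remain and which class was processed last, and the optimum is a shortest path through these layers.

```python
from functools import lru_cache

def min_total_setup(class_sizes, setup):
    """Minimize total sequence-dependent setup cost when jobs are
    interchangeable within attribute classes (e.g., colors).

    class_sizes[c] = number of jobs of class c
    setup[a][b]    = setup cost when a class-b job directly follows a class-a job
    """
    n_classes = len(class_sizes)

    @lru_cache(maxsize=None)
    def best(remaining, last):
        # Shortest remaining-cost from state (jobs left per class, last class).
        if sum(remaining) == 0:
            return 0
        costs = []
        for c in range(n_classes):
            if remaining[c] == 0:
                continue
            nxt = list(remaining)
            nxt[c] -= 1
            step = 0 if last is None or last == c else setup[last][c]
            costs.append(step + best(tuple(nxt), c))
        return min(costs)

    return best(tuple(class_sizes), None)

# Two-color example: 3 red jobs, 2 blue jobs, symmetric setup cost 5 between colors.
print(min_total_setup([3, 2], [[0, 5], [5, 0]]))  # 5: a single color change suffices
```

In this aggregated formulation the two-class state space has only $O(n^2)$ nodes, which mirrors the polynomial solvability of the two-color case noted above.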
7. Applications and Outlook
Single-machine scheduling theory underpins a spectrum of real-world problems, from batch production with thermal inertia and maintenance scheduling under machine wear to workload allocation on limited resources in service and computing systems. The field is moving rapidly toward hybrid methodologies in which combinatorial insights, parameterized algorithms, approximation schemes, and ML-based scheduling oracles are integrated for both theoretical rigor and computational tractability on large-scale, heterogeneous instances (Bouška et al., 19 Feb 2024, Antonov et al., 19 Aug 2025). Open questions persist regarding, for example, PTASs for general min-sum objectives, closing integrality gaps in strengthened LP formulations, robustification for stochastic or partially observed environments, and the design of algorithms matching the practical needs of rescheduling and maintenance-aware policies.
The extensive literature—including recent surveys (Vakhania et al., 29 May 2024)—provides a rigorous framework for continued research, with methodological advances in parameterized and data-driven scheduling promising to shape both theory and algorithmic practice in increasingly complex settings.