Vertical Scheduling Strategies

Updated 26 December 2025
  • Vertical Scheduling is a method that organizes temporal or spatial resource allocation in a layered, hierarchical manner across diverse domains like urban air mobility, iterative decoding, and cloud-edge workflows.
  • It employs techniques such as MILP, integer programming, and information bottleneck methods to resolve resource conflicts and maximize throughput while adhering to quality-of-service constraints.
  • Practical implementations show significant improvements, including up to 50% reduction in VTOL departure delays, 100% request servicing under peak load, and efficient state tracking in LDPC decoding.

Vertical Scheduling refers to a family of scheduling strategies and models that organize the temporal or spatial allocation of system resources in a vertically structured manner. While the term is used in diverse technical domains, leading exemplars occur in wireless communication (layered LDPC decoding), urban air mobility (takeoff and terminal operations for VTOL/eVTOL vehicles), and containerized workflow management in cloud-edge architectures. Vertical scheduling approaches address resource conflicts, throughput bounds, and quality-of-service objectives through structured decision frameworks, often contrasting with horizontal or flat scheduling baselines.

1. Vertical Scheduling in Urban Air Mobility and VTOL/eVTOL Operations

Vertical scheduling in Urban Air Mobility (UAM) and vertiport/vertiminal management centers on optimizing the time and resource allocation for electric or general vertical takeoff and landing (eVTOL/VTOL) vehicles over a network of pads, gates, taxiways, and airspace segments.

Vertiport Terminal Scheduling and Throughput Analysis

Saxena et al. (2 Aug 2024) address the scheduling of VTOL vehicle operations at vertiport terminals by proposing a Mixed Integer Linear Program (MILP) for holistic optimization of all operational phases: gate holding, taxiing, pad occupation (takeoff/landing), assigned climb/approach surface direction, and turnaround. The problem is structured as follows:

  • Decision Variables
    • t_n^i: timestamp at which VTOL i reaches node n (gates, pads, taxiway segments)
    • y_{ij}^n: binary precedence variable for VTOLs i, j at node n
    • z_{ij}: auxiliary binary for sequencing turnaround flights at gates
  • Objective
    • Minimize a weighted sum of all VTOL delays, partitioned by phase: gate, taxiway, pad, climb, and turnaround.
  • Constraints
    • Enforce conflict-free transitions (gate, taxiway, OFV, pad), time separation (wake-vortex, surface-direction), no overtaking, non-negativity, and logical ordering across shared links.
  • Throughput Capacity
    • The vertiminal's maximum throughput ρ_vertiminal is analytically determined as the minimum bottleneck across (i) the TLOF pad system, (ii) the taxiway network via max-flow/min-cut, and (iii) the gate system with turnaround constraints.
  • Case Study Findings
    • The MILP yields a ≈50% reduction in mean and median departure delays versus first-come-first-served (FCFS). When flight presence exceeds the saturation threshold, scheduled movements reach the theoretical throughput envelope (e.g., 8 movements/min for a typical LA-inspired setup).
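The role of the precedence variables y_{ij}^n can be illustrated with a deliberately tiny sketch: instead of a full MILP solver, a brute-force search over pad orderings at a single TLOF pad minimizes a weighted delay objective. The separation time, readiness times, and priority weights below are illustrative assumptions, not values from the paper.

```python
from itertools import permutations

# Brute-force stand-in for the MILP's precedence search (y_ij variables):
# choose the order in which three VTOLs occupy one TLOF pad, minimizing
# a weighted sum of delays. All numbers are assumed for illustration.

SEPARATION = 60  # seconds the pad stays blocked after each operation (assumed)

def schedule_delay(order, ready, weight):
    """Weighted total delay when VTOLs occupy the pad in `order`."""
    t, total = 0, 0
    for i in order:
        start = max(t, ready[i])               # wait for pad and readiness
        total += weight[i] * (start - ready[i])
        t = start + SEPARATION                 # enforce separation on the pad
    return total

def optimal_sequence(ready, weight):
    """Exhaustively search all pad orderings for the minimum-delay one."""
    return min(permutations(range(len(ready))),
               key=lambda o: schedule_delay(o, ready, weight))

ready, weight = [0, 0, 0], [1, 1, 10]          # VTOL 2 is high-priority
fcfs = tuple(sorted(range(3), key=lambda i: ready[i]))
best = optimal_sequence(ready, weight)          # serves VTOL 2 first
```

Here FCFS incurs a weighted delay of 1260 s while the optimized order (high-priority VTOL first) incurs 180 s, qualitatively mirroring the delay reductions the MILP obtains over FCFS baselines.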

Throughput Maximizing Takeoff Scheduling

Pooladsanj et al. (21 Mar 2025) formulate the vertical scheduling of eVTOL takeoffs and network movements via the VertiSync policy. The system is modeled as a directed graph of vertiports and pad resources. The VertiSync policy executes as:

  • Collect all outstanding trip requests,
  • Solve an integer program (extension of the Traffic Flow Management Problem) across slotted time and all active vehicles,
  • Enforce all sector, pad, and energy constraints,
  • Allocate takeoffs to maximize system throughput within conflict-free and energy-aware boundaries.

The throughput region is shown to be a convex polytope defined by time-sharing over achievable pad-limited takeoff service vectors. With sufficient fleet size and symmetric network structure, VertiSync is throughput-optimal. Empirically, VertiSync maintains bounded queueing delays under peak load, outperforming simple greedy baselines by servicing a full 100% of requests at critical demand, versus ≈80% for FCFS.
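The stability claim can be sketched with a toy slotted-time queue: when the aggregate request rate lies inside the pad-limited service capacity, the request backlog stays bounded; beyond it, the backlog grows without bound. The capacity and arrival patterns below are illustrative assumptions, not the paper's model.

```python
# Toy slotted-time illustration of the throughput region: requests are
# collected each slot and up to CAPACITY of them are served. Capacity and
# deterministic arrival patterns are assumed for illustration only.

CAPACITY = 2  # takeoffs the pad system can serve per slot (assumed)

def backlog_trace(arrivals):
    """Serve up to CAPACITY outstanding requests each slot; return backlog."""
    queue, trace = 0, []
    for a in arrivals:
        queue += a                       # collect new trip requests
        queue -= min(queue, CAPACITY)    # allocate conflict-free takeoffs
        trace.append(queue)
    return trace

stable   = backlog_trace([3, 0, 2, 1] * 1000)  # mean rate 1.5 < CAPACITY
overload = backlog_trace([3, 2, 3, 2] * 1000)  # mean rate 2.5 > CAPACITY
```

In the stable regime the backlog never exceeds one request; in the overloaded regime it grows by two requests per four-slot cycle, illustrating why demand must lie inside the service polytope for bounded delays.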

2. Vertical Layered Scheduling in Iterative Decoding

Mohr & Bauch (2022) analyze vertical layered scheduling in quasi-cyclic LDPC decoding by revisiting the per-iteration update order and hardware implications.

  • Vertical (Column-Layered) LDPC Scheduling
    • Variable nodes are partitioned into d_c layers, each corresponding to a base-matrix column.
    • Each iteration consists of d_c sequential partial check-node (CN) updates, one per column layer, followed by full variable-node (VN) updates on the corresponding variables.
    • Contrasts with horizontal scheduling (row-layered), where CN updates proceed by base-matrix rows and VNs are updated partially.
  • Algorithmic Mechanics
    • Partial CN updates use high-precision calculations mapped to quantized w-bit messages, with recursive updates enabling hardware-efficient state tracking.
    • Full VN updates operate on entire variable layers.
    • Mutual-information–maximizing compression (MIM) quantizes updates at both CN and VN boundaries via offline information bottleneck design.
  • Complexity and Performance Trade-offs
    • MIM-vertical (MIM-V) achieves nearly identical BER to MIM-horizontal (MIM-H), both surpassing classic offset-min-sum at equal bit width.
    • MIM-V incurs a ≈14% average increase in iteration count compared to MIM-H at equivalent E_b/N_0; 2-bit MIM-V increases the iteration count by ≈40%.
    • Most hardware cost is incurred by routing network barrel shifters, not node updates.
    • Memory cost per edge for vertical is ≈2.11 bits, exceeding the 2 bits for horizontal due to tracking partial CN state.
  • Hardware Architecture
    • Both 2D processor graph embedding and 3D stacking are detailed: straight-line wires and local barrel shifter logic reduce crossbar congestion.
    • Routing dominates silicon area; uniform quantization cuts logic by half with negligible BER penalty.
    • Vertical places memory for state with CNs, horizontal with VNs.
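The column-layered update order can be made concrete with a minimal min-sum decoder. This is a sketch on a small (7,4) Hamming parity-check matrix rather than a quasi-cyclic LDPC code, and it omits the paper's MIM quantization; only the vertical schedule (partial CN updates per column, then a full VN update) is illustrated.

```python
# Minimal vertical (column-layered) min-sum decoding sketch. The small
# Hamming parity-check matrix and the floating-point messages are
# illustrative assumptions; the paper uses quasi-cyclic codes with
# quantized, information-bottleneck-designed message mappings.

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
M, N = len(H), len(H[0])

def decode_vertical(llr, max_iter=10):
    """Column-layered min-sum: each column layer triggers partial
    check-node updates, then a full variable-node update for that column."""
    v2c = {(i, j): llr[j] for i in range(M) for j in range(N) if H[i][j]}
    c2v = {k: 0.0 for k in v2c}
    post = list(llr)
    for _ in range(max_iter):
        for j in range(N):                       # one layer per column
            for i in (i for i in range(M) if H[i][j]):
                sign, mag = 1, float("inf")      # partial CN update
                for k in range(N):
                    if H[i][k] and k != j:
                        sign *= 1 if v2c[i, k] >= 0 else -1
                        mag = min(mag, abs(v2c[i, k]))
                c2v[i, j] = sign * mag
            # full VN update for column j
            post[j] = llr[j] + sum(c2v[i, j] for i in range(M) if H[i][j])
            for i in range(M):
                if H[i][j]:
                    v2c[i, j] = post[j] - c2v[i, j]
        bits = [0 if p >= 0 else 1 for p in post]
        if all(sum(H[i][j] * bits[j] for j in range(N)) % 2 == 0
               for i in range(M)):
            return bits                          # syndrome satisfied
    return bits

llr = [2.0] * 7     # all-zero codeword received with positive LLRs...
llr[2] = -2.0       # ...except one flipped bit
decoded = decode_vertical(llr)
```

On this toy input the single flipped bit is corrected within one vertical sweep; the row-layered (horizontal) variant would instead perform full CN updates per base-matrix row with partial VN updates.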

3. Vertical Offloading in Cloud–Edge Workflow Scheduling

Vertical scheduling in cloud-edge computational workflows refers to the dynamic assignment of tasks to either edge or cloud environments, with runtime offloading contingent on local resource exhaustion (Shan et al., 2 Jan 2024).

  • Model Formulation
    • Each task s_{i,j} is assigned via binary selector variables α^e_{s_{i,j}} (edge) and α^c_{s_{i,j}} (cloud) with α^e_{s_{i,j}} + α^c_{s_{i,j}} = 1.
    • Delay constraint ensures that assignment plus execution and transfer time does not exceed task deadline.
    • Objectives include optimized global resource utilization bounded by CPU/RAM quotas.
  • Vertical Offloading Algorithm
    • When a Pod is killed by OOM, the informer triggers offloading: the scheduler selects a suitable cloud node based on maximum residual resources and updates the task's mapping.
    • The process involves label and image address management, and state synchronization with Redis.
  • Performance Metrics and Observations
    • Offloading rate scales with system load: at N = 10 concurrent workflows, up to 19% of tasks are offloaded.
    • Each vertical offloading event incurs a lifecycle penalty of 38–50 s.
    • Offloading is reactive (OOMKilled), not predictive or preventative.
    • Edge nodes are preserved for latency-sensitive workloads, while cloud nodes absorb overflow.
  • Trade-offs
    • Increased system complexity (dual image registries, metadata management).
    • Offloading latency may compromise deadlines for latency-sensitive workflow types.
    • System limits are tested under high frequency of OOMKilled events; excessive load can saturate metadata handling infrastructure.
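The reactive offloading step can be sketched in a few lines. This is a minimal illustration with hypothetical node records, not the KCES implementation: the Kubernetes informer machinery, label and image-address management, and Redis state synchronization described above are all omitted.

```python
# Minimal sketch of reactive vertical offloading (assumed data model):
# a task that cannot fit on its edge node (simulating an OOM-killed Pod)
# is reassigned to the cloud node with the most residual memory.

def pick_cloud_node(cloud_nodes):
    """Select the cloud node with maximum residual resources."""
    return max(cloud_nodes, key=lambda n: n["free_mem"])

def place_after_oom(task, edge_node, cloud_nodes):
    """Keep the task on the edge if it fits (alpha_e = 1); otherwise
    offload vertically to the best cloud node (alpha_c = 1)."""
    if task["mem"] <= edge_node["free_mem"]:
        return edge_node["name"]
    target = pick_cloud_node(cloud_nodes)
    target["free_mem"] -= task["mem"]      # reserve resources on the target
    return target["name"]

edge = {"name": "edge-0", "free_mem": 512}
clouds = [{"name": "cloud-0", "free_mem": 2048},
          {"name": "cloud-1", "free_mem": 4096}]
placement = place_after_oom({"name": "task", "mem": 1024}, edge, clouds)
```

The max-residual selection rule matches the node-selection criterion stated above; the 1024 MB task exceeds the edge node's 512 MB and lands on the roomier cloud node.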

4. Comparative Overview of Vertical Scheduling Variants

| Domain | Vertical Schedule Type | Objective | Trade-off |
| --- | --- | --- | --- |
| UAM/vertiports | Takeoff/terminal sequencing | Throughput, queue stability, safety | MILP size, constraint enforcement |
| LDPC decoding | Layered update (column-wise) | Error rate, complexity, quantization | More iterations, more state per edge |
| Cloud-edge orchestration | Run-time task offloading | Fault tolerance, utilization, deadlines | Offloading latency, complexity |

Each instantiation of vertical scheduling is domain-specific in constraints, objectives, and hardware or system impact but shares the common principle of strict temporal or spatial ordering along a vertical (hierarchical or layer-wise) axis.

5. Architectural and Analytical Foundations

  • Analytical Frameworks
    • MILP modeling underpins scheduling for urban air mobility (Saxena et al., 2 Aug 2024), enabling explicit handling of complex delay, capacity, and safety relations.
    • Integer programming combined with renewal-reward and Foster–Lyapunov arguments rigorously delineates throughput regions in cyclic eVTOL scheduling (Pooladsanj et al., 21 Mar 2025).
    • Information bottleneck and density evolution methods are foundational in mutual-information–maximizing layered LDPC schedules (Mohr et al., 2022).
    • Cloud–edge scheduling leverages binary decision variables, scheduling engine logic (e.g., Algorithm 5), and resource-constraint enforcement for vertical task placement (Shan et al., 2 Jan 2024).
  • Key Parameters
    • Pad occupancy, wake-vortex separation, path travel and taxi times, turnaround for UAM.
    • Bit-width, partial node update state, routing logic in LDPC decoding.
    • Compute/data transfer times, minimum resource thresholds, and failure detection for offloading in cloud-edge collaboration.

6. Generalizations, Scalability, and Limitations

  • Vertiport/vertiminal models are topology-agnostic and extendible to stochastic arrivals via rolling or receding-horizon MILPs (Saxena et al., 2 Aug 2024).
  • VertiSync’s throughput-optimality holds under symmetry and adequate fleet, but less so when network asymmetries or tight fleet supply dominate (Pooladsanj et al., 21 Mar 2025).
  • Cloud-edge offloading in KCES is currently reactive; proactive or predictive resource management remains a challenge (Shan et al., 2 Jan 2024).
  • Vertical LDPC schedules yield efficient hardware, but resource-constrained scenarios may favor horizontal layering due to fewer required iterations (Mohr et al., 2022).

A plausible implication is that, across domains, vertical scheduling frameworks that explicitly model both resource and sequencing constraints admit rigorous throughput analysis and practical capacity certification—enabling provable guarantees under domain-tailored architectures and operational regimes.
