
Dynamic Temporal Granularity Scheduler

Updated 29 December 2025
  • Dynamic Temporal Granularity Scheduler is a method for adaptive temporal resource management that partitions system time to optimize utilization and fairness.
  • It employs auction-based, negotiation-based, and information-theoretic protocols, with algorithms such as Weighted Interval Scheduling, to efficiently allocate GPU, spectrum, and CTBN inference resources.
  • Adaptive feedback loops and calibration mechanisms in DTGS ensure real-time responsiveness, reduced latency, and improved computational accuracy despite workload fluctuations.

A Dynamic Temporal Granularity Scheduler is a class of resource management and computational control algorithms designed to adapt temporal partitioning and resource allocation in response to workload characteristics, application requirements, and system state. The concept spans theoretical, algorithmic, and practical concerns in multi-user GPU management, real-time inference workloads, spectrum markets, temporal inference in graphical models, and timeline summarization. Contemporary research emphasizes auction- or negotiation-based protocols, information-theoretic partitioning, and adaptive feedback mechanisms, each targeting optimality in utilization, fairness, latency, and computational efficiency.

1. Formal Model and Core Concepts

At its foundation, a Dynamic Temporal Granularity Scheduler (DTGS) formalizes system time as either continuous or finely discretized over a finite horizon $[0,T]$. Resources (e.g., MIG-enabled GPU slices, spectrum units, or cluster intervals in CTBNs) are partitioned such that available intervals, denoted by $\mathcal W$, represent executable or allocatable time–capacity windows. Each scheduler iteration involves (i) announcing or selecting a temporal window $w^*$, (ii) gathering job- or agent-specific subjob (or usage-variant) proposals parameterized by feasible start times and durations, and (iii) clearing these proposals under resource and policy constraints using optimal or heuristic assignment algorithms (e.g., Weighted Interval Scheduling, marketplace matching, or cluster graph updates).

Formally, for GPU scheduling as in JASDA, the window space is

$$\mathcal W = \bigcup_{k=1}^{K} \{(s_k, c_k, a, b-a) \mid [a,b] \in \mathcal F_k\}$$

where each resource slice $s_k$ has fixed capacity $c_k$, and $\mathcal F_k$ records its idle intervals. Jobs or agents decompose their work into eligible non-preemptive subjobs (variants) in response to $w^*$, and the scheduler maximizes policy-weighted utility under interval and safety constraints (Konopa et al., 16 Oct 2025).
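Constructing the window space from per-slice idle intervals can be sketched as follows; the `Window` record and the sample slice data are illustrative assumptions, not structures from the JASDA paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Window:
    slice_id: int
    capacity: int
    start: float      # interval start a
    duration: float   # b - a

def window_space(slices):
    """slices: list of (slice_id, capacity, idle_intervals) triples,
    where idle_intervals lists the (a, b) pairs in F_k."""
    W = []
    for slice_id, capacity, idle in slices:
        for a, b in idle:
            # Each idle interval contributes one (s_k, c_k, a, b-a) tuple.
            W.append(Window(slice_id, capacity, a, b - a))
    return W

slices = [
    (0, 10, [(0.0, 4.0), (6.0, 9.0)]),  # slice s_0, capacity 10
    (1, 20, [(2.0, 5.0)]),              # slice s_1, capacity 20
]
W = window_space(slices)
# Earliest-start announcement policy: offer the window opening soonest.
w_star = min(W, key=lambda w: w.start)
```

Policy-driven scoring (lead-time, fragmentation) would replace the simple earliest-start `min` without changing the enumeration.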

In spectrum management, the scheduler selects the temporal slot length $\Delta t_i \in T$ (with $T$ the set of supported slot granularities) and broadcasts this to operators for demand/usage forecasting, then aggregates market-clearing across spatial, spectral, and temporal axes (Rasti et al., 19 Feb 2025).

In probabilistic graphical models (CTBNs), cluster graphs maintain temporal subintervals, dynamically split via a KL-divergence test to adapt granularity where variables exhibit high evolution rates, preserving inference accuracy while minimizing computation (Saria et al., 2012).
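The split criterion can be sketched for plain discrete distributions; treating the KL-divergence reduction against a fine-grained reference as the trigger is a simplification of the sepset-level test, and all distributions and the threshold value below are illustrative.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def should_split(reference, coarse, split, kappa=0.05):
    """Split a temporal subinterval iff the split approximation reduces
    KL divergence from the reference by more than the threshold kappa."""
    return kl(reference, coarse) - kl(reference, split) > kappa

reference = [0.7, 0.2, 0.1]    # fine-grained target over 3 states
coarse    = [0.4, 0.4, 0.2]    # single-interval approximation
split     = [0.65, 0.25, 0.1]  # two-subinterval approximation
```

A fast-evolving variable makes the coarse approximation diverge from the reference, so the test fires and the local time partition is refined; slow regions keep their coarse mesh.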

2. Window Announcement and Variant Generation

Scheduling proceeds by iteratively offering windows and eliciting job- or agent-local proposals tailored to the candidate window. In JASDA:

  • The scheduler selects $w^* = (s_k, c_k, t_{\min}, \Delta t)$ by earliest-start or policy-driven scoring (incorporating lead-time, fragmentation, or other priorities).
  • Jobs receiving $w^*$ enumerate feasible $(t_{i,j}, \tilde\Delta t_{i,j})$ pairs, using Temporal Resource Profiles (TRPs) and probabilistic safety constraints:

$$\Pr\left(\max_{t \in [t_{i,j},\, t_{i,j}+\tilde\Delta t_{i,j}]} \mathrm{RAM}_i(t) > c_k \,\middle|\, \mathrm{FMP}_i\right) \leq \theta$$

  • Each variant proposal $v$ is annotated with a local utility vector capturing metrics such as predicted job turnaround, QoS compliance, or energy efficiency. Jobs submit bid sets $\mathcal V_i$ to the scheduler (Konopa et al., 16 Oct 2025).
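The probabilistic safety constraint can be checked by Monte Carlo sampling of predicted peak memory over the candidate interval. In this sketch, `sample_peak_ram` is a stand-in assumption for a real TRP/FMP-derived model, and all numeric values are illustrative.

```python
import random

def sample_peak_ram(t, dt, rng):
    # Placeholder profile: base load plus noise, growing mildly with duration.
    # A real sampler would be driven by the job's functional memory profile.
    return 6.0 + rng.uniform(0.0, 3.0) + 0.1 * dt

def is_safe(t, dt, capacity, theta, n_samples=5000, seed=0):
    """Accept (t, dt) iff the empirical probability that peak RAM exceeds
    the slice capacity c_k is at most theta."""
    rng = random.Random(seed)
    violations = sum(sample_peak_ram(t, dt, rng) > capacity
                     for _ in range(n_samples))
    return violations / n_samples <= theta

# Enumerate feasible (start, duration) pairs inside an announced window.
candidates = [(0.0, 1.0), (0.0, 2.0), (1.0, 4.0)]
feasible = [(t, dt) for t, dt in candidates
            if is_safe(t, dt, capacity=9.2, theta=0.05)]
```

Only the variants passing the check are annotated with utilities and submitted in the bid set; the long-duration candidate is rejected because its peak-memory exceedance probability overshoots θ.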

Synchronous market-based scheduling in O-RAN likewise leverages operator-local forecasting and synthetic scenario generation (via discriminative and generative AI) for each candidate $\Delta t_i$, before centralized slot selection and spectrum redistribution (Rasti et al., 19 Feb 2025).

3. Policy-Driven Clearing and Optimal Assignment

Policy clearing aggregates received variant/bid proposals and optimally resolves assignments subject to constraints. A widely used framework is Weighted Interval Scheduling (WIS):

  • Synthesized score:

$$\mathrm{Score}(v) = \lambda\, \tilde h(v) + (1-\lambda)\, \tilde f_{\mathrm{sys}}(v)$$

where $\tilde h(v)$ is the job-side utility, $\tilde f_{\mathrm{sys}}(v)$ the system-side utility, and $\lambda$ the policy weight; fairness factors (e.g., job age) are folded into the score as well.

  • Interval selection maximizes total score over non-overlapping intervals:

$$\max_{S \subseteq V} \sum_{v \in S} \mathrm{Score}(v) \quad \text{subject to no overlap in } [t_{\min},\, t_{\min}+\Delta t]$$

This maximization is solved efficiently in $\mathcal O(M \log M)$ time via dynamic programming, where $M = |V|$ (Konopa et al., 16 Oct 2025).
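The no-overlap maximization above is the textbook Weighted Interval Scheduling recurrence (sort by finish time, binary-search the latest compatible variant). A self-contained sketch; the variant tuples and scores are illustrative, not the paper's data structures.

```python
import bisect

def clear_window(variants):
    """variants: list of (start, finish, score) tuples for one announced
    window. Returns (best total score, chosen non-overlapping variants).
    Runs in O(M log M): one sort plus one bisect per variant."""
    v = sorted(variants, key=lambda x: x[1])        # sort by finish time
    finishes = [f for _, f, _ in v]
    M = len(v)
    best = [0.0] * (M + 1)                          # best[i]: optimum over v[:i]
    take = [False] * M
    for i, (s, f, score) in enumerate(v):
        p = bisect.bisect_right(finishes, s, 0, i)  # last variant ending <= s
        with_i = best[p] + score
        if with_i > best[i]:
            best[i + 1] = with_i
            take[i] = True
        else:
            best[i + 1] = best[i]
    # Backtrack to recover the cleared set.
    chosen, i = [], M
    while i > 0:
        if take[i - 1]:
            chosen.append(v[i - 1])
            i = bisect.bisect_right(finishes, v[i - 1][0], 0, i - 1)
        else:
            i -= 1
    return best[M], list(reversed(chosen))

total, chosen = clear_window([(0, 3, 5.0), (2, 5, 6.0), (4, 7, 5.0), (6, 9, 4.0)])
```

Here the two compatible score-5 variants beat the single score-6 variant that overlaps both, which is exactly the globally optimal trade the greedy-by-score heuristic would miss.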

Marketplace frameworks similar in spirit select the slot index $i^*$ by maximizing aggregate utility minus trading cost and a slot-execution penalty:

$$J(i) = \sum_o \left[ U_o\big(s_{o,t}^{\mathrm{req}}(i)\big) - C_{\mathrm{buy}}\big(s_o^+(i)\big) + C_{\mathrm{sell}}\big(s_o^-(i)\big) \right] - C_{\mathrm{slot}}(\Delta t_i)$$

and broadcast the selected granularity and allocations (Rasti et al., 19 Feb 2025).
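The clearing rule reduces to an argmax of $J(i)$ over the candidate granularities. In this sketch the per-operator utility/cost tables are hypothetical placeholders for the forecasting outputs, not the cited framework's API.

```python
def select_slot(granularities, operators, slot_cost):
    """granularities: candidate slot lengths Δt_i (seconds).
    operators: per-operator dicts mapping Δt_i to forecast utility,
    buy cost, and sell revenue. slot_cost: Δt_i -> C_slot(Δt_i)."""
    def J(dt):
        return sum(op['utility'][dt] - op['buy'][dt] + op['sell'][dt]
                   for op in operators) - slot_cost[dt]
    return max(granularities, key=J)

granularities = [0.01, 0.1, 1.0]
operators = [
    {'utility': {0.01: 9.0, 0.1: 8.0, 1.0: 5.0},
     'buy':     {0.01: 2.0, 0.1: 1.0, 1.0: 0.5},
     'sell':    {0.01: 0.5, 0.1: 0.5, 1.0: 0.2}},
]
slot_cost = {0.01: 6.0, 0.1: 1.0, 1.0: 0.2}  # finer slots cost more signaling
best_dt = select_slot(granularities, operators, slot_cost)
```

The slot-cost term captures the control-plane trade-off: the finest granularity maximizes raw operator utility but is priced out by its signaling overhead, so an intermediate slot length wins.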

DTELS-style summarization frames granularity selection as a sequence of merge or split operations, orchestrated by information-centric merging scores and optimal timeline node alignment (Zhang et al., 2024).

4. Adaptive Feedback, Calibration, and Safety

Dynamic Temporal Granularity Schedulers embed online feedback–calibration loops and multiphase adaptivity:

  • Ex-ante and ex-post score calibration corrects strategic misreporting or model error, updating job trust coefficients via observed utility discrepancies,

$$\rho_J = \exp\!\bigl(-\kappa\, \mathbb E_v[\epsilon(v)]\bigr)$$

with $\epsilon(v)$ an aggregate feature mismatch (Konopa et al., 16 Oct 2025).

  • Job- or agent-level “age” increments avoid starvation, raising effective priority of unscheduled tasks over time.
  • In DREAM, an Adaptivity Engine tunes weight parameters $(\alpha, \beta)$ in the core MapScore function via finite-difference search to minimize UXCost as the workload evolves. The search converges rapidly and does not block inference dispatch, enabling real-time responsiveness (Kim et al., 2022).
  • In CTBN inference, sepsets trigger local time-partition splits when the KL-divergence reduction surpasses a threshold $\kappa$, efficiently adjusting approximation fidelity in fast-evolving regions without enforcing a uniform global discretization (Saria et al., 2012).
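The ex-post trust calibration above can be sketched with a running-mean estimator of the observed mismatches; the estimator choice and the value of κ are assumptions for illustration.

```python
import math

def update_trust(mismatches, kappa=2.0):
    """Trust coefficient rho_J = exp(-kappa * E[eps(v)]), with the
    expectation estimated as the mean observed feature mismatch."""
    if not mismatches:
        return 1.0  # no evidence yet: full trust
    mean_eps = sum(mismatches) / len(mismatches)
    return math.exp(-kappa * mean_eps)

rho_honest = update_trust([0.01, 0.02, 0.0])      # small mismatches
rho_misreporting = update_trust([0.4, 0.5, 0.6])  # systematic over-promising
```

Multiplying a job's synthesized score by $\rho_J$ discounts bids from jobs whose promised utilities have historically diverged from realized behavior, which blunts the incentive to misreport.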

5. Performance Metrics and Experimental Results

Key metrics vary by application domain but typically incorporate both utility/efficiency and fairness:

| Metric | Context | Quantitative Finding |
| --- | --- | --- |
| Throughput, latency | GPU scheduling (JASDA) | WIS clearing guarantees per-window score maximization (Konopa et al., 16 Oct 2025) |
| Jain's fairness index | GPU scheduling (JASDA) | Fairness tuned by $\lambda$ and the age term $\beta_{\mathrm{age}}$ |
| Spectrum utilization (%) | O-RAN | Dynamic $\Delta t$ increases utilization (68%→82%) and profit (~35% gain) (Rasti et al., 19 Feb 2025) |
| UXCost (EDP-style) | ML inference (DREAM) | DREAM reduces UXCost by 32.2–50.0% (geomean), up to 80–97% over baselines (Kim et al., 2022) |
| KL divergence, log-likelihood | CTBN inference | Dynamic splitting matches fine-uniform accuracy with order-of-magnitude savings (Saria et al., 2012) |
| Informativeness, factuality, coherence | Timeline summarization (DTELS) | LLM schedulers dominate extractive methods; best Info 36.8, Fact 94.6 (Zhang et al., 2024) |

Across domains, adaptive temporal granularity scheduling yields substantial gains in utilization, system responsiveness, and computational efficiency, with explicit mechanisms to balance trade-offs among optimality, fairness, and overhead.

6. Scalability, Constraints, and Limitations

Scalability is achieved via decomposability (e.g., per-MIG-slice independence in JASDA), efficient scheduling solvers (e.g., DP for WIS), and asynchronous or localized adaptivity:

  • In JASDA, per-cycle cost is $\mathcal O(M \cdot t_{\mathrm{gen}} + M \log M)$, with $M$ variants per auctioned window. Total system overhead is quasi-linear in arrival rates and variant-per-job bounds (Konopa et al., 16 Oct 2025).
  • Spectrum management with dynamic Δt reduces packet drops and forecasting error at the cost of higher control-plane overhead; hybrid or hierarchical switching is needed to avoid oscillations below 10 ms granularity (Rasti et al., 19 Feb 2025).
  • DREAM ensures adaptivity at several behavioral granularities (task, model, operator) and maintains service under uncertain pipeline changes via lightweight online parameter search, converging within seconds (Kim et al., 2022).
  • In CTBNs, only fast-evolving regions require fine time-meshes, achieving computational savings proportional to the heterogeneity in process rates (Saria et al., 2012).

Limitations include sensitivity to model estimation quality, control-plane signaling burden at ultra-fine granularities, and remaining challenges in trust, security, and decentralized market mechanisms in distributed deployments.

7. Generalizations and Future Directions

The principles underlying Dynamic Temporal Granularity Scheduling are being extended toward joint space–time–resource optimization, decentralized market clearing protocols (including blockchain-enforced trust models), probabilistic and uncertainty-aware scheduling (e.g., Bayesian RNNs for forecasting), and smooth integration with hierarchical control architectures—spanning from fine-grained system loops to policy-setting at multi-second or longer horizons (Rasti et al., 19 Feb 2025).

Current research also pursues synergy with LLM-assisted summarization and control (as in DTELS), adaptive multi-level profiling (functional memory, traffic patterns, cluster evolution), and reinforcement-learning-driven pricing or utility shaping in multi-operator environments.

Dynamic Temporal Granularity Schedulers represent a unifying algorithmic abstraction for managing temporally heterogeneous, multi-agent, and complex-system resources with explicit adaptivity, feedback, and policy programmability across technical domains.
