Carbon-Aware Schedulers in Green Computing

Updated 14 April 2026
  • Carbon-aware schedulers are computational systems designed to minimize emissions by aligning job scheduling with real-time or forecasted carbon intensity data and workload flexibility.
  • They employ temporal shifting and spatial allocation strategies, including greedy heuristics and LP-based solutions, to optimize job placements under resource and deadline constraints.
  • Their modular architectures integrate historical metadata, carbon forecasts, and user constraints to effectively reduce greenhouse gas emissions across varied digital infrastructures.

A carbon-aware scheduler is a computational system component responsible for assigning jobs, workflows, or resources in a way that explicitly minimizes or controls operational carbon emissions, typically by leveraging temporal or spatial flexibility with respect to carbon-intensity signals of the underlying energy infrastructure. Carbon-aware scheduling appears throughout the lifecycle of modern computing—including cloud datacenters, CI/CD pipelines, edge computing, distributed data movement, federated learning, and manufacturing—serving as a principal lever to reduce operational greenhouse gas emissions in digital infrastructure.

1. Core Modeling Principles and Architectures

Carbon-aware schedulers explicitly incorporate real-time or forecasted carbon-intensity data, operational workload flexibility (in time, space, or both), and system constraints such as deadlines, SLAs, throughput, or cost. The canonical scheduler accepts the following data:

  • Workload descriptors: Job/task definitions, estimated resource requirements, time-flexibility (e.g., deadlines, start time windows).
  • Carbon-intensity signals: Time-series or point forecasts $C_{r,t}$ for region $r$ and time $t$, or for more granular entities (e.g., network hops).
  • Historical metadata: Runtime distributions, prior job durations, dependency graphs.
  • User constraints: Allowed regions, latency/throughput SLOs, cost limits.
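The inputs above can be sketched as plain data records. This is a minimal illustration with hypothetical field names, not the schema of any particular scheduler:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """Workload descriptor: resource estimate plus time-flexibility."""
    job_id: str
    cpu_hours: float          # estimated resource requirement
    earliest_start: int       # start-time window (hour index)
    deadline: int             # must finish by this hour index
    flexible: bool = True     # whether the job may be temporally shifted

@dataclass
class CarbonSignal:
    """Carbon-intensity forecast C_{r,t} for one region, per hour."""
    region: str
    intensity: list = field(default_factory=list)  # gCO2/kWh per hour

@dataclass
class UserConstraints:
    """Per-user placement limits."""
    allowed_regions: list
    max_latency_ms: float
    cost_limit: float
```

A metadata store would additionally hold historical runtime distributions and dependency graphs keyed by `job_id`.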

Architectures are modular, typically comprising a frontend to ingest jobs and user data, a preprocessor to filter or annotate jobs, a metadata store for historical runtime and dependency data, a Carbon Forecast API interface for carbon-scoring, a scheduler core to solve the scheduling subproblems, an execution engine for deferred runs, and a realtime feedback loop to collect actual outcomes (Claßen et al., 2023).
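One stage of such a pipeline, the carbon-scoring step behind the Carbon Forecast API interface, can be sketched as follows. This is a hedged illustration, assuming an hourly forecast array and jobs with a contiguous runtime; `carbon_score` and `best_start` are hypothetical names, not functions from the cited systems:

```python
def carbon_score(forecast, start, duration):
    """Emissions proxy for one candidate placement: sum of forecast
    carbon intensity (gCO2/kWh) over the runtime window
    [start, start + duration)."""
    return sum(forecast[start:start + duration])

def best_start(forecast, duration, earliest, deadline):
    """Pick the feasible start slot with the lowest carbon score,
    honoring the job's start window and deadline (job must finish
    by `deadline`, so the latest start is deadline - duration)."""
    candidates = range(earliest, deadline - duration + 1)
    return min(candidates, key=lambda s: carbon_score(forecast, s, duration))
```

For example, with `forecast = [500, 480, 200, 150, 300, 450]` a two-hour job due by hour 6 is placed at hour 2, where its windowed intensity sum (350) is lowest.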

2. Mathematical Formulation and Algorithmic Strategies

2.1 Temporal Carbon Shifting

Carbon-aware scheduling for temporal flexibility generally formulates the problem as:

$$\min \sum_{t} C_t \cdot L_t$$

subject to deadline and capacity constraints, where $L_t$ is the load scheduled at (or shifted to) time $t$, and $C_t$ is the carbon intensity at time $t$ (Acun et al., 2022).

The CAS (Carbon Explorer) model, for example, discretizes time, splits workloads into inflexible/flexible fractions, and enforces capacity, balance, and shift bounds. It enables deferral of a fraction $F$ of loads and subsequent "pull-in" into lower-carbon windows. Algorithmic approaches include greedy heuristics (flexible jobs are moved from high-carbon to low-carbon hours, subject to deferral and hardware headroom) (Acun et al., 2022), as well as LP-based exact solutions for smaller scales (Rodrigues et al., 4 Jun 2025).
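A minimal sketch of the greedy heuristic, in the spirit of (but not reproducing) the Carbon Explorer model: a flexible fraction $F$ of each hour's load is pooled and packed into the lowest-carbon hours subject to per-hour capacity. Deferral-window bounds are omitted for brevity; for this separable objective, greedy filling of the cheapest hours coincides with the LP optimum of $\min \sum_t C_t L_t$:

```python
def shift_flexible_load(load, carbon, flex_fraction, capacity):
    """Reshape hourly load to cut emissions.

    load:          original load per hour
    carbon:        carbon intensity C_t per hour (gCO2/kWh)
    flex_fraction: fraction F of each hour's load that is deferrable
    capacity:      per-hour capacity (hardware headroom) bound on L_t
    Returns the reshaped per-hour load; total load is conserved.
    """
    T = len(load)
    fixed = [x * (1 - flex_fraction) for x in load]   # inflexible part
    pool = sum(load) - sum(fixed)                     # deferrable load
    shifted = fixed[:]
    # Pack the pool into the lowest-carbon hours first.
    for t in sorted(range(T), key=lambda t: carbon[t]):
        take = min(capacity[t] - shifted[t], pool)
        shifted[t] += take
        pool -= take
    return shifted
```

With `load = [10, 10, 10, 10]`, `carbon = [500, 100, 400, 200]`, `flex_fraction = 0.5`, and `capacity = [15, 15, 15, 15]`, the flexible half of the load migrates into the two cleanest hours, yielding `[5, 15, 5, 15]`.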

2.2 Spatial and Spatio-temporal Scheduling

Schedulers for geo-distributed services (e.g., serverless, web services, federated learning) map requests or jobs to execution sites $r$ chosen to minimize aggregated carbon emissions, subject to latency and capacity constraints.
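A toy illustration of the spatial selection step, with made-up region names and a single latency-SLO constraint (real spatio-temporal schedulers jointly optimize over time and space):

```python
def pick_region(carbon_by_region, latency_ms, slo_ms):
    """Among regions meeting the latency SLO, pick the one with the
    lowest current carbon intensity C_{r,t}. Inputs are illustrative
    dicts keyed by region name."""
    feasible = [r for r in carbon_by_region if latency_ms[r] <= slo_ms]
    if not feasible:
        raise ValueError("no region satisfies the latency SLO")
    return min(feasible, key=lambda r: carbon_by_region[r])
```

For instance, if `eu-north` is the cleanest region but exceeds a 100 ms SLO, the scheduler falls back to the cleanest region that remains feasible.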
