EcoServe: Designing Carbon-Aware AI Inference Systems

Published 7 Feb 2025 in cs.DC (arXiv:2502.05043v2)

Abstract: The rapid increase in LLM ubiquity and scale places unprecedented demands on computing infrastructure. These demands consume not only large compute and memory resources but also significant energy, yielding substantial operational and embodied carbon emissions. In this work, we present three main observations based on modeling and traces from the production deployment of two Generative AI services at a major cloud service provider. First, while GPUs dominate operational carbon, host processing systems (e.g., CPUs, memory, storage) dominate embodied carbon. Second, offline, batch inference accounts for a significant portion (up to 55%) of serving capacity. Third, there are different levels of heterogeneity across hardware and workloads for LLM inference. Based on these observations, we design EcoServe, a carbon-aware resource provisioning and scheduling framework for LLM serving systems. It is built on four principles: Reduce, Reuse, Rightsize, and Recycle (4R). With a cross-stack ILP formulation and design, we demonstrate that EcoServe can lower carbon emissions by up to 47% compared to performance-, energy-, and cost-optimized design points, while maintaining performance targets and SLOs.
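The abstract describes provisioning as an ILP that minimizes combined embodied and operational carbon subject to serving-capacity constraints. As an illustrative sketch only, the toy model below brute-forces a tiny instance of such a problem: the hardware profiles, carbon numbers, and capacities are hypothetical, and EcoServe's actual cross-stack formulation is far richer (heterogeneous workloads, scheduling, SLOs).

```python
from itertools import product

# Hypothetical hardware profiles (illustrative numbers, not from the paper):
# amortized embodied carbon per hour (gCO2e), operational carbon per hour
# at full load (gCO2e), and serving capacity (requests/s).
HARDWARE = {
    "gpu_new":  {"embodied": 50.0, "operational": 120.0, "capacity": 100},
    "gpu_old":  {"embodied": 10.0, "operational": 150.0, "capacity": 70},
    "cpu_host": {"embodied": 30.0, "operational": 40.0,  "capacity": 20},
}

def provision(demand_rps, max_units=5):
    """Brute-force the small integer program: choose unit counts that
    minimize total (embodied + operational) carbon, subject to
    aggregate capacity >= demand."""
    names = list(HARDWARE)
    best = None
    for counts in product(range(max_units + 1), repeat=len(names)):
        cap = sum(c * HARDWARE[n]["capacity"] for c, n in zip(counts, names))
        if cap < demand_rps:
            continue  # infeasible: cannot meet the demand constraint
        carbon = sum(c * (HARDWARE[n]["embodied"] + HARDWARE[n]["operational"])
                     for c, n in zip(counts, names))
        if best is None or carbon < best[0]:
            best = (carbon, dict(zip(names, counts)))
    return best  # (total carbon, unit counts per hardware type)
```

With these made-up numbers, serving 200 requests/s is cheapest in carbon terms on two new GPUs; changing the embodied/operational split (e.g., cheap recycled hardware with higher operational cost) shifts the optimum, which is the kind of trade-off the 4R principles exploit.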
