
Reducing Energy Bloat in Large Model Training (2312.06902v3)

Published 12 Dec 2023 in cs.LG and cs.DC

Abstract: Training large AI models on numerous GPUs consumes a massive amount of energy, making power delivery one of the largest limiting factors in building and operating datacenters for AI workloads. However, we observe that not all energy consumed during training directly contributes to end-to-end throughput; a significant portion can be removed without slowing down training. We call this portion energy bloat. In this work, we identify two independent sources of energy bloat in large model training and propose Perseus, a training system that mitigates both. To do this, Perseus obtains the time-energy tradeoff frontier of a large model training job using an efficient graph cut-based algorithm, and schedules computation energy consumption across time to reduce both types of energy bloat. Evaluation on large models, including GPT-3 and Bloom, shows that Perseus reduces the energy consumption of large model training by up to 30% without any throughput loss or hardware modification.
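
The abstract's key observation is that, in a pipeline-parallel training job, computation that is off the critical path can be slowed down (for example, by capping GPU frequency) without reducing end-to-end throughput. The sketch below is only a simplified illustration of that idea, not Perseus's graph cut-based frontier algorithm: it assumes a steady-state pipeline whose iteration time is set by the slowest stage, and all stage names and time/energy numbers are hypothetical.

# Illustrative sketch (not the paper's algorithm). Perseus computes the full
# time-energy frontier of the training job with a graph cut-based method;
# this toy version only shows the underlying idea that stages with slack can
# run at lower-energy operating points without slowing the iteration.
# Stage names and the (time, energy) tables below are made up.

# For each stage: candidate operating points as
# (time_per_microbatch_ms, energy_per_microbatch_J), e.g. GPU frequency caps.
stage_options = {
    "stage0": [(10.0, 4.0), (12.0, 3.2), (15.0, 2.8)],
    "stage1": [(14.0, 5.5), (16.0, 4.6), (20.0, 4.1)],
    "stage2": [(9.0, 3.6), (11.0, 3.0), (13.5, 2.6)],
}

def plan_without_slowdown(options):
    """Pick, per stage, the lowest-energy point whose time does not exceed
    the slowest stage's fastest time (the pipeline bottleneck)."""
    bottleneck = max(min(t for t, _ in pts) for pts in options.values())
    plan = {}
    for stage, pts in options.items():
        feasible = [(t, e) for t, e in pts if t <= bottleneck]
        plan[stage] = min(feasible, key=lambda te: te[1])  # least energy
    return bottleneck, plan

bottleneck, plan = plan_without_slowdown(stage_options)
baseline = sum(min(pts, key=lambda te: te[0])[1] for pts in stage_options.values())
planned = sum(e for _, e in plan.values())
print(f"bottleneck stage time: {bottleneck} ms")
print(f"energy per microbatch: {baseline:.1f} J -> {planned:.1f} J")

In this toy setup, the bottleneck stage keeps its fastest setting while the other stages absorb their slack at cheaper operating points, cutting per-microbatch energy with no change in iteration time. The real system reasons over the pipeline's computation graph (including warm-up and cool-down phases) rather than a single steady-state bottleneck.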

Authors (6)
  1. Jae-Won Chung (8 papers)
  2. Yile Gu (25 papers)
  3. Insu Jang (5 papers)
  4. Luoxi Meng (2 papers)
  5. Nikhil Bansal (61 papers)
  6. Mosharaf Chowdhury (39 papers)
Citations (3)
