
Inshrinkerator: Compressing Deep Learning Training Checkpoints via Dynamic Quantization (2306.11800v3)

Published 20 Jun 2023 in cs.LG

Abstract: With the increase in the scale of Deep Learning (DL) training workloads in terms of compute resources and time consumption, the likelihood of encountering in-training failures rises substantially, leading to lost work and resource wastage. Such failures are typically offset by a checkpointing mechanism, which comes at the cost of storage and network bandwidth overhead. State-of-the-art approaches involve lossy model compression mechanisms, which induce a tradeoff between the resulting model quality (accuracy) and compression ratio. Delta compression is then used to further reduce the overhead by only storing the difference between consecutive checkpoints. We make a key enabling observation that the sensitivity of model weights to compression varies during training, and different weights benefit from different quantization levels (ranging from retaining full precision to pruning). We propose (1) a non-uniform quantization scheme that leverages this variation, (2) an efficient search mechanism that dynamically finds the best quantization configurations, and (3) a quantization-aware delta compression mechanism that rearranges weights to minimize checkpoint differences, thereby maximizing compression. We instantiate these contributions in Inshrinkerator - a framework for DL workload checkpoint compression. Our experiments show that Inshrinkerator consistently achieves a better tradeoff between accuracy and compression ratios compared to prior works, enabling a compression ratio up to 39x and withstanding up to 10 restores with negligible accuracy impact for fault-tolerant training. Inshrinkerator achieves at least an order of magnitude reduction in checkpoint storage overhead for training failure recovery as well as transfer learning use cases without any loss of accuracy.
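The abstract's core mechanism pairs lossy quantization of checkpoint weights with delta compression across consecutive checkpoints. The sketch below is a minimal, hypothetical illustration of that combination, not the paper's actual algorithm: it uses simple uniform 8-bit quantization (the paper's scheme is non-uniform and searched dynamically), and the helper names (`quantize`, `delta_checkpoint`) are made up for this example.

```python
import numpy as np

def quantize(weights, lo, scale, bits=8):
    """Uniformly quantize a float tensor to `bits` bits.

    Hypothetical helper: Inshrinkerator uses a non-uniform, per-group
    scheme found by search; this sketch uses plain linear quantization
    with a fixed calibration (lo, scale) so deltas are comparable.
    """
    levels = (1 << bits) - 1
    q = np.clip(np.round((weights - lo) / scale), 0, levels)
    return q.astype(np.uint16)

def delta_checkpoint(prev_q, curr_q):
    """Store only the indices and values that changed since the last
    quantized checkpoint (the delta-compression idea in the abstract)."""
    idx = np.nonzero(prev_q != curr_q)[0]
    return idx, curr_q[idx]

# Toy example: two consecutive "checkpoints" of one layer's weights,
# differing by a small training update.
rng = np.random.default_rng(0)
w0 = rng.normal(size=1000).astype(np.float32)
w1 = w0 + rng.normal(scale=1e-3, size=1000).astype(np.float32)

# Calibrate once on the first checkpoint so both share a grid.
lo, hi = float(w0.min()), float(w0.max())
scale = (hi - lo) / 255

q0 = quantize(w0, lo, scale)
q1 = quantize(w1, lo, scale)
idx, vals = delta_checkpoint(q0, q1)
print(f"changed entries: {len(idx)} of {len(q0)}")
```

Because the per-step weight updates are small relative to the quantization step, most quantized values are identical between checkpoints, so the delta is far smaller than a full checkpoint; rearranging weights to maximize this overlap is the paper's third contribution.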

Authors (7)
  1. Amey Agrawal (10 papers)
  2. Sameer Reddy (1 paper)
  3. Satwik Bhattamishra (13 papers)
  4. Venkata Prabhakara Sarath Nookala (2 papers)
  5. Vidushi Vashishth (4 papers)
  6. Kexin Rong (14 papers)
  7. Alexey Tumanov (30 papers)
Citations (1)
