
LazyBatching: An SLA-aware Batching System for Cloud Machine Learning Inference (2010.13103v1)

Published 25 Oct 2020 in cs.DC, cs.AR, cs.LG, and cs.NE

Abstract: In cloud ML inference systems, batching is an essential technique for increasing throughput, which helps optimize total cost of ownership. Prior graph batching combines individual DNN graphs into a single one, allowing multiple inputs to be executed concurrently. We observe that coarse-grained graph batching becomes suboptimal at handling dynamic inference request traffic, leaving significant performance on the table. This paper proposes LazyBatching, an SLA-aware batching system that handles both scheduling and batching at the granularity of individual graph nodes, rather than the entire graph, enabling flexible batching. We show that LazyBatching can intelligently determine the set of nodes that can be efficiently batched together, achieving average improvements of 15x, 1.5x, and 5.5x over graph batching in average response time, throughput, and SLA satisfaction, respectively.
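
The core idea, as the abstract describes it, is to make batching decisions per graph node rather than per whole graph, subject to each request's SLA slack. The snippet below is a minimal, hypothetical sketch of that idea only, not the paper's actual scheduling policy; `Request`, `slack_ms`, `NODE_LATENCY_MS`, and `schedule_step` are placeholder names invented for illustration.

```python
# Illustrative sketch only, not the paper's LazyBatching algorithm: a toy
# node-granularity batching step. All names here are hypothetical.
from dataclasses import dataclass

# Hypothetical per-node execution latencies (ms) of a small DNN graph.
NODE_LATENCY_MS = {"conv1": 2.0, "conv2": 3.0, "fc": 1.0}
NODE_ORDER = list(NODE_LATENCY_MS)


@dataclass
class Request:
    rid: int
    deadline_ms: float   # absolute SLA deadline for this request
    node_idx: int = 0    # next graph node this request still has to run


def slack_ms(req: Request, now_ms: float) -> float:
    """Time left before the request would miss its SLA if served immediately."""
    remaining_work = sum(NODE_LATENCY_MS[n] for n in NODE_ORDER[req.node_idx:])
    return req.deadline_ms - now_ms - remaining_work


def schedule_step(pending: list, now_ms: float) -> list:
    """Form one batch at node granularity.

    Unlike whole-graph batching, requests are grouped per node, so a late
    arrival can join in-flight requests at their current node instead of
    waiting for the next full-graph launch.
    """
    if not pending:
        return []
    # Serve the most SLA-urgent request first and batch everything that is
    # waiting at the same node and still has non-negative slack.
    pending.sort(key=lambda r: slack_ms(r, now_ms))
    head = pending[0]
    return [r for r in pending
            if r.node_idx == head.node_idx and slack_ms(r, now_ms) >= 0.0]


if __name__ == "__main__":
    reqs = [Request(0, deadline_ms=20.0), Request(1, deadline_ms=12.0)]
    print([r.rid for r in schedule_step(reqs, now_ms=0.0)])  # -> [1, 0]
```

Grouping around the most urgent request's current node is only one plausible policy; the paper's contribution lies in choosing which node sets to batch while tracking per-request SLA slack.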

Authors (3)
  1. Yujeong Choi (8 papers)
  2. Yunseong Kim (2 papers)
  3. Minsoo Rhu (30 papers)
Citations (60)
