A Survey of Multi-Tenant Deep Learning Inference on GPU (2203.09040v3)

Published 17 Mar 2022 in cs.DC and cs.AR

Abstract: Deep Learning (DL) models have achieved superior performance. Meanwhile, computing hardware such as NVIDIA GPUs has shown strong scaling trends, roughly doubling throughput and memory bandwidth with each generation. With such scaling, multi-tenant deep learning inference, which co-locates multiple DL models on the same GPU, is now widely deployed to improve resource utilization, enhance serving throughput, and reduce energy cost. However, achieving efficient multi-tenant DL inference is challenging and requires thorough full-stack system optimization. This survey summarizes and categorizes the emerging challenges and optimization opportunities for multi-tenant DL inference on GPUs. By overviewing the entire optimization stack, summarizing multi-tenant computing innovations, and elaborating on recent technological advances, we hope this survey sheds light on new optimization perspectives and motivates novel work in future large-scale DL system optimization.
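As a minimal illustration of the utilization argument behind co-location (with hypothetical numbers, not figures from the paper): if a single model's inference kernels occupy only a fraction of a GPU's compute, serving several tenants on one device can raise aggregate utilization, up to the point where the device saturates.

```python
# Hypothetical back-of-the-envelope model of multi-tenant GPU utilization.
# Assumes each tenant alone uses a fixed fraction of the GPU's compute and
# that tenants' kernels overlap perfectly -- a simplification; real
# co-location involves contention and scheduling overheads the survey covers.

def colocated_utilization(per_model_fraction: float, num_models: int) -> float:
    """Estimated aggregate GPU utilization, capped at 100% of the device."""
    return min(1.0, per_model_fraction * num_models)

if __name__ == "__main__":
    # A single model using 30% of the GPU leaves 70% of the device idle,
    # while co-locating three such tenants approaches full utilization.
    print(f"1 tenant:  {colocated_utilization(0.30, 1):.0%}")
    print(f"3 tenants: {colocated_utilization(0.30, 3):.0%}")
    print(f"5 tenants: {colocated_utilization(0.30, 5):.0%}")  # capped at 100%
```

In practice the cap is reached earlier because tenants also contend for memory bandwidth and SM occupancy, which is why the survey emphasizes full-stack optimization rather than naive packing.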

Authors (6)
  1. Fuxun Yu (39 papers)
  2. Di Wang (408 papers)
  3. Longfei Shangguan (11 papers)
  4. Minjia Zhang (54 papers)
  5. Chenchen Liu (24 papers)
  6. Xiang Chen (346 papers)
Citations (28)
