λScale: Enabling Fast Scaling for Serverless Large Language Model Inference (2502.09922v1)

Published 14 Feb 2025 in cs.DC

Abstract: Serverless computing has emerged as a compelling solution for cloud-based model inference. However, as modern LLMs continue to grow in size, existing serverless platforms often face substantial model startup overhead. This poses a significant challenge in efficiently scaling model instances to accommodate dynamic, bursty workloads commonly observed in real-world inference services. In this paper, we introduce λScale, an efficient serverless inference system to achieve fast model scaling. The key idea behind λScale is to leverage high-speed RDMA networks between GPU nodes for fast model multicast, while enabling distributed inference execution during model transmission -- referred to as "execute-while-load". λScale proposes an efficient model scaling scheme, λPipe, which supports adaptive model multicast and dynamically constructs execution pipelines across receiving nodes for collaborative, distributed inference. Additionally, λScale supports efficient model management across GPU and host memory, allowing fast scaling for models across different storage tiers. Evaluation results show that λScale enables fast model scaling and effectively handles load spikes, achieving up to 5x tail-latency improvement and 31.3% cost reduction compared to state-of-the-art solutions on real-world LLM inference traces.
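To make the "execute-while-load" idea from the abstract concrete, here is a minimal, hedged sketch (not the authors' implementation): a loader thread stands in for receiving model shards over a fast network, while an executor thread runs inference on each shard as soon as it arrives, so compute overlaps with transfer. The shard count, timings, and handoff queue are illustrative assumptions only.

```python
# Hedged sketch of "execute-while-load": overlap inference with model transfer.
# All names, shard sizes, and delays are illustrative assumptions, not λScale's API.
import threading
import queue
import time

NUM_SHARDS = 4          # assume the model is split into 4 sequential shards
ready = queue.Queue()   # loader -> executor handoff of loaded shard ids

def loader():
    """Simulate receiving model shards over a fast link (e.g., RDMA multicast)."""
    for shard_id in range(NUM_SHARDS):
        time.sleep(0.5)              # stand-in for transfer time of one shard
        print(f"[loader]   shard {shard_id} received")
        ready.put(shard_id)          # announce the shard is usable
    ready.put(None)                  # sentinel: all shards transferred

def executor():
    """Run inference shard-by-shard as soon as the needed weights are available."""
    while True:
        shard_id = ready.get()       # block until the next shard arrives
        if shard_id is None:
            break
        # In a real system this would execute the transformer layers held in the
        # shard on a GPU; here a short sleep simulates compute overlapping transfer.
        time.sleep(0.2)
        print(f"[executor] ran layers of shard {shard_id}")

t_load = threading.Thread(target=loader)
t_exec = threading.Thread(target=executor)
t_load.start(); t_exec.start()
t_load.join(); t_exec.join()
print("inference pipeline drained shortly after the last shard arrived")
```

In the paper's setting, λPipe additionally spreads such partially loaded shards across multiple receiving GPU nodes and chains them into a distributed execution pipeline; the single-node sketch above only shows the transfer/compute overlap.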

Authors (11)
  1. Minchen Yu (6 papers)
  2. Rui Yang (221 papers)
  3. Chaobo Jia (1 paper)
  4. Zhaoyuan Su (9 papers)
  5. Sheng Yao (1 paper)
  6. Tingfeng Lan (7 papers)
  7. Yuchen Yang (60 papers)
  8. Yue Cheng (32 papers)
  9. Wei Wang (1793 papers)
  10. Ao Wang (43 papers)
  11. Ruichuan Chen (16 papers)