λScale: Enabling Fast Scaling for Serverless Large Language Model Inference (2502.09922v1)
Abstract: Serverless computing has emerged as a compelling solution for cloud-based model inference. However, as modern LLMs continue to grow in size, existing serverless platforms often face substantial model startup overhead. This poses a significant challenge in efficiently scaling model instances to accommodate dynamic, bursty workloads commonly observed in real-world inference services. In this paper, we introduce λScale, an efficient serverless inference system to achieve fast model scaling. The key idea behind λScale is to leverage high-speed RDMA networks between GPU nodes for fast model multicast, while enabling distributed inference execution during model transmission -- referred to as "execute-while-load". λScale proposes an efficient model scaling scheme, λPipe, which supports adaptive model multicast and dynamically constructs execution pipelines across receiving nodes for collaborative, distributed inference. Additionally, λScale supports efficient model management across GPU and host memory, allowing fast scaling for models across different storage tiers. Evaluation results show that λScale enables fast model scaling and effectively handles load spikes, achieving up to 5x tail-latency improvement and 31.3% cost reduction compared to state-of-the-art solutions on real-world LLM inference traces.
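The sketch below illustrates the "execute-while-load" idea described in the abstract: inference begins on model layers that have already arrived while the remaining layers are still in flight. This is a minimal, hypothetical Python sketch, not λScale's actual implementation; the names `fetch_layer`, `run_layer`, and `NUM_LAYERS` are illustrative placeholders, and real transfers would use RDMA multicast and GPU execution rather than threads and strings.

```python
# Hypothetical sketch of "execute-while-load": overlap model transfer
# with layer-by-layer execution. Not λScale's actual code.
import queue
import threading

NUM_LAYERS = 32          # illustrative model depth
ready = queue.Queue()    # layers handed from the loader to the executor

def fetch_layer(i):
    """Placeholder for receiving layer i (e.g., over RDMA multicast)."""
    return f"weights_of_layer_{i}"

def run_layer(i, weights, activations):
    """Placeholder for executing layer i on the local GPU."""
    return activations + [i]

def loader():
    # Publish each layer as soon as it lands, in order.
    for i in range(NUM_LAYERS):
        ready.put((i, fetch_layer(i)))

def executor(prompt):
    # Consume layers as they become available, so execution starts
    # before the full model has finished transferring.
    activations = [prompt]
    for _ in range(NUM_LAYERS):
        i, weights = ready.get()
        activations = run_layer(i, weights, activations)
    return activations

t = threading.Thread(target=loader)
t.start()
result = executor("hello")
t.join()
print(len(result) - 1, "layers executed while loading")
```

In λPipe, this overlap is extended across multiple receiving nodes, which form execution pipelines so that partially loaded replicas can collaboratively serve requests during scale-out.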
- Minchen Yu (6 papers)
- Rui Yang (221 papers)
- Chaobo Jia (1 paper)
- Zhaoyuan Su (9 papers)
- Sheng Yao (1 paper)
- Tingfeng Lan (7 papers)
- Yuchen Yang (60 papers)
- Yue Cheng (32 papers)
- Wei Wang (1793 papers)
- Ao Wang (43 papers)
- Ruichuan Chen (16 papers)