LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism (2404.09526v2)

Published 15 Apr 2024 in cs.DC and cs.LG

Abstract: The context window of LLMs is rapidly increasing, leading to a huge variance in resource usage between different requests as well as between different phases of the same request. Restricted by static parallelism strategies, existing LLM serving systems cannot efficiently utilize the underlying resources to serve variable-length requests in different phases. To address this problem, we propose a new parallelism paradigm, elastic sequence parallelism (ESP), to elastically adapt to the variance between different requests and phases. Based on ESP, we design and build LoongServe, an LLM serving system that (1) improves computation efficiency by elastically adjusting the degree of parallelism in real time, (2) improves communication efficiency by reducing key-value cache migration overhead and overlapping partial decoding communication with computation, and (3) improves GPU memory efficiency by reducing key-value cache fragmentation across instances. Our evaluation under diverse real-world datasets shows that LoongServe improves the maximum throughput by up to 3.85$\times$ compared to chunked prefill and 5.81$\times$ compared to prefill-decoding disaggregation.
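
The core idea in the abstract, varying the degree of sequence parallelism between the compute-heavy prefill phase and the lighter decoding phase, can be illustrated with a small scheduling sketch. The snippet below is a hypothetical policy, not LoongServe's actual API: `EspScheduler`, `Request`, and the `tokens_per_instance` threshold are illustrative assumptions, and the real system additionally weighs KV-cache placement and migration cost when scaling instances in or out.

```python
# Minimal sketch of an elastic sequence-parallelism (ESP) style policy.
# All names (EspScheduler, Request, tokens_per_instance) are illustrative
# assumptions, not LoongServe's real interface.

from dataclasses import dataclass
from math import ceil


@dataclass
class Request:
    prompt_tokens: int   # prompt length (prefill workload)
    generated: int = 0   # tokens decoded so far


class EspScheduler:
    def __init__(self, total_instances: int, tokens_per_instance: int = 8192):
        self.total_instances = total_instances
        self.tokens_per_instance = tokens_per_instance

    def prefill_dop(self, req: Request) -> int:
        """Degree of parallelism for prefill: scale out with prompt
        length so long prompts are split across more instances."""
        want = ceil(req.prompt_tokens / self.tokens_per_instance)
        return max(1, min(want, self.total_instances))

    def decode_dop(self, req: Request) -> int:
        """Decoding emits one token per step, so it needs far fewer
        instances; scale in to free resources for other requests."""
        kv_tokens = req.prompt_tokens + req.generated
        want = ceil(kv_tokens / (4 * self.tokens_per_instance))
        return max(1, min(want, self.total_instances))


if __name__ == "__main__":
    sched = EspScheduler(total_instances=8)
    req = Request(prompt_tokens=120_000)
    print("prefill instances:", sched.prefill_dop(req))  # scales out to 8
    print("decode instances: ", sched.decode_dop(req))   # scales in to 4
```

Choosing the degree of parallelism per phase at run time, rather than fixing it at deployment, is what lets an ESP-style system absorb the large variance in resource usage between long-prompt prefills and token-by-token decoding.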

Authors (6)
  1. Bingyang Wu (7 papers)
  2. Shengyu Liu (5 papers)
  3. Yinmin Zhong (11 papers)
  4. Peng Sun (210 papers)
  5. Xuanzhe Liu (59 papers)
  6. Xin Jin (285 papers)
Citations (18)