
InferCept: Efficient Intercept Support for Augmented Large Language Model Inference (2402.01869v2)

Published 2 Feb 2024 in cs.LG, cs.CL, and cs.DC

Abstract: LLMs are increasingly integrated with external environments, tools, and agents like ChatGPT plugins to extend their capability beyond language-centric tasks. However, today's LLM inference systems are designed for standalone LLMs. They treat each external interaction as the end of LLM generation and form a new request when the interaction finishes, causing unnecessary recomputation of already computed contexts, which accounts for 37-40% of total model forwarding time. This paper presents InferCept, the first LLM inference framework targeting augmented LLMs and supporting the efficient interception of LLM generation. InferCept minimizes the GPU resource waste caused by LLM interceptions and dedicates saved memory for serving more requests. InferCept improves the overall serving throughput by 1.6x-2x and completes 2x more requests per second compared to the state-of-the-art LLM inference systems.
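To make the recomputation waste described above concrete, here is a minimal sketch in Python. It is illustrative only: the toy `forward` step stands in for a real model forward pass, and none of these names reflect InferCept's actual API. It contrasts a naive serving loop, which treats every interception as the end of a request and drops the KV cache, with one that preserves the cache and computes only the newly appended tokens:

```python
def forward(tokens, cache):
    """Toy model step: 'computes' only the token positions not already cached."""
    new = tokens[len(cache):]
    cache.extend(new)
    return len(new)  # number of token positions actually forwarded

def naive_serve(context, tool_outputs):
    """Each interception forms a fresh request, so the whole context re-runs."""
    work = forward(context, [])          # initial prefill
    for out in tool_outputs:
        context = context + [out]
        work += forward(context, [])     # cache dropped: full recomputation
    return work

def preserving_serve(context, tool_outputs):
    """The cache survives interceptions; only appended tokens are computed."""
    cache = []
    work = forward(context, cache)       # prefill once, cache retained
    for out in tool_outputs:
        context = context + [out]
        work += forward(context, cache)  # incremental work only
    return work

prompt = list(range(1000))                      # a 1000-token context
calls = [f"tool_result_{i}" for i in range(5)]  # five interceptions
print("naive:     ", naive_serve(prompt, calls))       # 6015 positions
print("preserving:", preserving_serve(prompt, calls))  # 1005 positions
```

In this toy run, the naive loop re-forwards the entire ~1000-token context on each of the five interceptions, while the cache-preserving loop forwards each position once; that gap is the kind of interception-induced recomputation the abstract reports as 37-40% of total model forwarding time.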

Authors (5)
  1. Reyna Abhyankar
  2. Zijian He
  3. Vikranth Srivatsa
  4. Hao Zhang
  5. Yiying Zhang
Citations (6)