
PerLLM: Personalized Inference Scheduling with Edge-Cloud Collaboration for Diverse LLM Services (2405.14636v1)

Published 23 May 2024 in cs.DC and cs.NI

Abstract: With the rapid growth in the number of LLM users, it is difficult for bandwidth-constrained cloud servers to simultaneously process massive LLM services in real time. Recently, edge-cloud infrastructures have been used to improve the processing efficiency of large-scale LLM services. However, the diversity of task requirements and the dynamics of resources pose great challenges to inference scheduling, leading to significant resource waste. In this paper, we present PerLLM, a personalized inference scheduling framework with edge-cloud collaboration designed for diverse LLM services. To handle the complexity of multiple constraints and the decision-making process of edge-cloud collaboration, PerLLM integrates an upper confidence bound algorithm based on a constraint satisfaction mechanism. For diverse LLM services, PerLLM can optimize service scheduling and resource allocation solutions within the edge-cloud infrastructure to meet processing time requirements while minimizing energy costs. Experimental results from different model deployments show that PerLLM can effectively meet the processing time requirements of personalized services. Compared to other methods, PerLLM achieves 2.2x, 2.1x, and 1.6x higher throughput and reduces energy costs by more than 50%.
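The abstract describes a constraint-aware upper confidence bound (UCB) scheduler: among candidate edge and cloud servers, pick the one with the best optimistic estimate of reward (e.g., low energy cost) while screening out servers whose estimated latency would violate the processing-time requirement. The sketch below illustrates that general idea only; the class name, reward definition (negative energy cost), and deadline-screening rule are illustrative assumptions, not PerLLM's actual algorithm.

```python
import math


class ConstrainedUCB:
    """Toy constraint-aware UCB bandit over servers (arms).

    Reward is taken as negative energy cost, so maximizing reward
    minimizes energy. An arm is skipped if even its optimistic
    (lower-bound) latency estimate exceeds the deadline. This is an
    illustrative sketch, not the paper's exact scheduling rule.
    """

    def __init__(self, n_arms: int, deadline: float):
        self.n = [0] * n_arms            # pulls per arm
        self.reward_sum = [0.0] * n_arms  # cumulative reward per arm
        self.latency_sum = [0.0] * n_arms  # cumulative latency per arm
        self.deadline = deadline
        self.t = 0

    def select(self) -> int:
        """Return the index of the server to route the next request to."""
        self.t += 1
        # Explore: play every arm once before trusting any estimate.
        for arm, count in enumerate(self.n):
            if count == 0:
                return arm
        best_arm, best_score = 0, -math.inf
        for arm in range(len(self.n)):
            bonus = math.sqrt(2.0 * math.log(self.t) / self.n[arm])
            mean_latency = self.latency_sum[arm] / self.n[arm]
            # Skip arms whose optimistic latency still misses the deadline.
            if mean_latency - bonus > self.deadline:
                continue
            score = self.reward_sum[arm] / self.n[arm] + bonus
            if score > best_score:
                best_arm, best_score = arm, score
        return best_arm

    def update(self, arm: int, reward: float, latency: float) -> None:
        """Record the observed reward and latency for a completed request."""
        self.n[arm] += 1
        self.reward_sum[arm] += reward
        self.latency_sum[arm] += latency
```

In use, a scheduler would call `select()` per request (arm 0 = edge, arm 1 = cloud, say), serve the request on that server, then feed the measured energy cost and latency back via `update()`; over time the bandit concentrates traffic on servers that are both cheap and deadline-feasible.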

Authors (6)
  1. Zheming Yang (6 papers)
  2. Yuanhao Yang (2 papers)
  3. Chang Zhao (6 papers)
  4. Qi Guo (237 papers)
  5. Wenkai He (2 papers)
  6. Wen Ji (20 papers)
Citations (6)