
Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices (2406.03777v3)

Published 6 Jun 2024 in cs.LG and cs.AI

Abstract: The scaling laws have become the de facto guidelines for designing LLMs, but they were studied under the assumption of unlimited computing resources for both training and inference. As LLMs are increasingly used as personalized intelligent assistants, their customization (i.e., learning through fine-tuning) and deployment onto resource-constrained edge devices will become more and more prevalent. An urgent but open question is how a resource-constrained computing environment would affect the design choices for a personalized LLM. We study this problem empirically in this work. In particular, we consider the tradeoffs among a number of key design factors and their intertwined impacts on learning efficiency and accuracy. The factors include the learning methods for LLM customization, the amount of personalized data used for learning customization, the types and sizes of LLMs, the compression methods of LLMs, the amount of time afforded to learn, and the difficulty levels of the target use cases. Through extensive experimentation and benchmarking, we draw a number of surprisingly insightful guidelines for deploying LLMs onto resource-constrained devices. For example, the optimal choice between parameter learning and RAG may vary depending on the difficulty of the downstream task, longer fine-tuning time does not necessarily help the model, and a compressed LLM may be a better choice than an uncompressed LLM to learn from limited personalized data.
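To make the abstract's setting concrete, the sketch below shows one common way to combine two of the design factors it names: model compression (4-bit quantization) and parameter-efficient customization (LoRA fine-tuning). This is a minimal illustrative example using the Hugging Face transformers and peft libraries, not the paper's experimental setup; the model name, LoRA rank, and target modules are assumptions chosen for illustration.

```python
# Illustrative sketch (not the paper's setup): LoRA fine-tuning of a
# 4-bit-quantized small LLM, an edge-friendly configuration that pairs
# compression with parameter-efficient learning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-1.3b"  # assumption: any small causal LM works here

# Load the base model in 4-bit to mimic a compressed, memory-constrained footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach low-rank adapters so only a small fraction of weights is trained
# on the user's personalized data.
lora = LoraConfig(
    r=8,                                  # assumed rank, tune per device budget
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Whether such parameter learning beats a retrieval-augmented (RAG) alternative is, per the abstract, task-difficulty dependent rather than a fixed rule.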

Authors (12)
  1. Ruiyang Qin (15 papers)
  2. Dancheng Liu (17 papers)
  3. Zheyu Yan (23 papers)
  4. Zhaoxuan Tan (35 papers)
  5. Zhenge Jia (12 papers)
  6. Meng Jiang (126 papers)
  7. Ahmed Abbasi (20 papers)
  8. Jinjun Xiong (118 papers)
  9. Yiyu Shi (136 papers)
  10. Chenhui Xu (15 papers)
  11. Amir Nassereldine (14 papers)
  12. Jiajie Li (27 papers)
Citations (6)
