Demystifying Platform Requirements for Diverse LLM Inference Use Cases (2406.01698v1)

Published 3 Jun 2024 in cs.AR, cs.AI, cs.DC, and cs.LG

Abstract: LLMs have shown remarkable performance across a wide range of applications, often outperforming human experts. However, deploying these parameter-heavy models efficiently for diverse inference use cases requires carefully designed hardware platforms with ample computing, memory, and network resources. With LLM deployment scenarios and models evolving at breakneck speed, the hardware requirements to meet SLOs remain an open research question. In this work, we present an analytical tool, GenZ, to study the relationship between LLM inference performance and various platform design parameters. Our analysis provides insights into configuring platforms for different LLM workloads and use cases. We quantify the platform requirements to support SOTA LLMs like LLaMA and GPT-4 under diverse serving settings. Furthermore, we project the hardware capabilities needed to enable future LLMs potentially exceeding hundreds of trillions of parameters. The trends and insights derived from GenZ can guide AI engineers deploying LLMs as well as computer architects designing next-generation hardware accelerators and platforms. Ultimately, this work sheds light on the platform design considerations for unlocking the full potential of LLMs across a spectrum of applications. The source code is available at https://github.com/abhibambhaniya/GenZ-LLM-Analyzer .
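The kind of analysis the abstract describes — relating inference performance to platform compute and memory parameters — can be illustrated with a simple roofline-style estimate. The sketch below is not the actual GenZ code; the function, its name, and all hardware numbers are illustrative assumptions (roughly H100-class peak FLOPS and HBM bandwidth).

```python
# Hypothetical roofline-style estimate of autoregressive decode throughput,
# in the spirit of analytical tools like GenZ. Not the actual GenZ code;
# all names and numbers here are illustrative assumptions.

def decode_tokens_per_sec(num_params, bytes_per_param, peak_flops, mem_bw):
    """Estimate decode throughput (tokens/sec) for a single request.

    Each generated token needs roughly 2 * num_params FLOPs (one
    multiply-accumulate per weight) and, at batch size 1, one full read
    of the weights from memory. Token latency is bounded by whichever
    resource (compute or memory bandwidth) is the bottleneck.
    """
    flops_per_token = 2.0 * num_params
    bytes_per_token = num_params * bytes_per_param
    compute_time = flops_per_token / peak_flops  # seconds if compute-bound
    memory_time = bytes_per_token / mem_bw       # seconds if memory-bound
    return 1.0 / max(compute_time, memory_time)

# Illustrative example: a 70B-parameter model in FP16 (2 bytes/param) on a
# platform with 1e15 FLOP/s peak compute and 3.35e12 B/s memory bandwidth.
# At batch size 1, decode is memory-bandwidth-bound on such a platform.
tps = decode_tokens_per_sec(70e9, 2, 1e15, 3.35e12)
print(f"estimated decode throughput: {tps:.1f} tokens/sec")
```

Such a model makes the platform-design trade-off explicit: for small-batch decode, throughput scales with memory bandwidth rather than peak FLOPS, which is one of the relationships a tool like GenZ quantifies across serving settings.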

Authors (8)
  1. Abhimanyu Bambhaniya (2 papers)
  2. Ritik Raj (9 papers)
  3. Geonhwa Jeong (12 papers)
  4. Souvik Kundu (76 papers)
  5. Sudarshan Srinivasan (20 papers)
  6. Midhilesh Elavazhagan (3 papers)
  7. Madhu Kumar (2 papers)
  8. Tushar Krishna (87 papers)
Citations (5)