A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators (2310.04607v1)

Published 6 Oct 2023 in cs.PF, cs.AI, cs.AR, and cs.LG

Abstract: AI methods have become critical in scientific applications to help accelerate scientific discovery. LLMs are being considered a promising approach to address some of the challenging problems because of their superior generalization capabilities across domains. The effectiveness of the models and the accuracy of the applications are contingent upon their efficient execution on the underlying hardware infrastructure. Specialized AI accelerator hardware systems have recently become available for accelerating AI applications. However, the comparative performance of these AI accelerators on LLMs has not been previously studied. In this paper, we systematically study LLMs on multiple AI accelerators and GPUs and evaluate their performance characteristics for these models. We evaluate these systems with (i) a micro-benchmark using a core transformer block, (ii) a GPT-2 model, and (iii) an LLM-driven science use case, GenSLM. We present our findings and analyses of the models' performance to better understand the intrinsic capabilities of AI accelerators. Furthermore, our analysis takes into account key factors such as sequence lengths, scaling behavior, sparsity, and sensitivity to gradient accumulation steps.
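As a sketch of evaluation tier (i), the snippet below times a single transformer encoder layer across a sweep of sequence lengths and reports throughput in tokens/sec. It is a minimal, hypothetical PyTorch example: the layer dimensions, batch size, sequence lengths, and iteration counts are illustrative assumptions, not the configuration used in the paper.

# Hypothetical micro-benchmark of a core transformer block (PyTorch).
# All sizes below are illustrative placeholders, not the paper's settings.
import time
import torch
import torch.nn as nn

d_model, n_heads, batch = 1024, 16, 8  # assumed model width, heads, batch size
block = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
block = block.to(device).eval()

for seq_len in (128, 512, 1024, 2048):  # sweep sequence lengths
    x = torch.randn(batch, seq_len, d_model, device=device)
    with torch.no_grad():
        for _ in range(3):  # warm-up iterations, excluded from timing
            block(x)
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        iters = 10
        for _ in range(iters):
            block(x)
        if device == "cuda":
            torch.cuda.synchronize()  # wait for kernels before stopping the clock
        dt = time.perf_counter() - t0
    tokens_per_sec = batch * seq_len * iters / dt
    print(f"seq_len={seq_len:5d}  {tokens_per_sec:,.0f} tokens/sec")

Sweeping seq_len this way exposes the sequence-length sensitivity the abstract highlights, since self-attention cost grows quadratically with sequence length.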

Authors (9)
  1. Murali Emani (17 papers)
  2. Sam Foreman (9 papers)
  3. Varuni Sastry (2 papers)
  4. Zhen Xie (17 papers)
  5. Siddhisanket Raskar (3 papers)
  6. William Arnold (4 papers)
  7. Rajeev Thakur (16 papers)
  8. Venkatram Vishwanath (26 papers)
  9. Michael E. Papka (25 papers)
Citations (9)