SAIH: A Scalable Evaluation Methodology for Understanding AI Performance Trend on HPC Systems (2212.03410v1)

Published 7 Dec 2022 in cs.DC, cs.AI, and cs.PF

Abstract: Novel AI technology has expedited research in various scientific fields, e.g., cosmology, physics, and bioinformatics, and AI workloads have inevitably become a significant category of workload on high performance computing (HPC) systems. Existing AI benchmarks tend to customize well-recognized AI applications so as to evaluate the AI performance of HPC systems at a predefined problem size, in terms of datasets and AI models. Because they lack scalability in problem size, such static AI benchmarks are ill-suited to understanding the performance trend of evolving AI applications on HPC systems, in particular scientific AI applications on large-scale systems. In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems by scaling the problem sizes of customized AI applications. To enable scalability, SAIH builds a set of novel mechanisms for augmenting problem sizes. As the data and model constantly scale, we can investigate the trend and range of AI performance on HPC systems and further diagnose system bottlenecks. To verify our methodology, we augment a cosmological AI application to evaluate a real GPU-equipped HPC system as a case study of SAIH.
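
To make the core idea concrete, here is a minimal sketch of a scaling-sweep benchmark in the spirit of SAIH: grow the problem size (here, a toy model width and batch size), time short training runs, and watch where throughput stops scaling. All names (`make_model`, `measure_throughput`) and the uniform width scaling are illustrative assumptions, not the paper's actual augmentation mechanisms, which operate on a real cosmological AI application.

```python
# Hypothetical scaling-sweep sketch, not the paper's implementation.
import time
import torch
import torch.nn as nn

def make_model(width: int) -> nn.Module:
    """Toy surrogate for a scientific AI model; width is the scaled dimension."""
    return nn.Sequential(
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 1),
    )

def measure_throughput(width: int, batch_size: int, steps: int = 20) -> float:
    """Time a short training run at one problem size and return samples/sec."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = make_model(width).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # Synthetic data standing in for an augmented (scaled) dataset.
    x = torch.randn(batch_size, width, device=device)
    y = torch.randn(batch_size, 1, device=device)

    # One warm-up step so lazy initialization is excluded from the timing.
    loss_fn(model(x), y).backward()
    opt.step()
    opt.zero_grad()
    if device == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()
    return steps * batch_size / (time.perf_counter() - start)

if __name__ == "__main__":
    # Sweep the problem size; a throughput plateau or drop as the scale
    # grows hints at a system bottleneck (memory capacity, bandwidth, ...).
    for width in (256, 1024, 4096):
        for batch in (64, 256, 1024):
            tps = measure_throughput(width, batch)
            print(f"width={width:5d} batch={batch:5d} -> {tps:10.1f} samples/s")
```

Plotting samples/sec against problem size over such a sweep is one simple way to expose the performance trend and range that the methodology targets.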

Authors (7)
  1. Jiangsu Du (9 papers)
  2. Dongsheng Li (240 papers)
  3. Yingpeng Wen (2 papers)
  4. Jiazhi Jiang (2 papers)
  5. Dan Huang (19 papers)
  6. Xiangke Liao (17 papers)
  7. Yutong Lu (31 papers)
