
GraphChallenge.org Sparse Deep Neural Network Performance (2004.01181v2)

Published 25 Mar 2020 in cs.LG, cs.CV, cs.NE, and stat.ML

Abstract: The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data. Sparse AI analytics present unique scalability difficulties. The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems. The sparse DNN challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment. In 2019 several sparse DNN challenge submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art sparse DNN execution time, $T_{\rm DNN}$, is a strong function of the number of DNN operations performed, $N_{\rm op}$. The sparse DNN challenge provides a clear picture of current sparse DNN systems and underscores the need for new innovations to achieve high performance on very large sparse DNNs.
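The inference computation the challenge is built on is a repeated sparse matrix product with a per-neuron bias and a ReLU nonlinearity, $Y_{l+1} = h(Y_l W_l + b_l)$. A minimal sketch of that layered computation using `scipy.sparse` is below; the input, weights, and biases here are illustrative stand-ins, not the challenge's MNIST-derived dataset, and the bias-handling detail (applying bias only to nonzero products to preserve sparsity) follows the commonly used reference formulation rather than any single submission.

```python
import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(Y0, weights, biases):
    """Layered sparse DNN inference: Y_{l+1} = ReLU(Y_l @ W_l + b_l).

    Y0      : sparse input feature matrix (inputs x neurons)
    weights : list of sparse weight matrices, one per layer
    biases  : list of per-neuron bias vectors, one per layer

    Bias is added only to the nonzero entries of the product so the
    intermediate result stays sparse (illustrative convention).
    """
    Y = Y0.tocsr()
    for W, b in zip(weights, biases):
        Z = (Y @ W).tocsr()            # sparse-sparse matrix multiply
        Z.data += b[Z.indices]         # per-column (per-neuron) bias
        Z.data = np.maximum(Z.data, 0.0)  # ReLU on the stored values
        Z.eliminate_zeros()            # drop entries zeroed by ReLU
        Y = Z
    return Y

# Hypothetical tiny example: 2 inputs, 2 neurons, 1 identity-weight layer.
Y0 = sp.csr_matrix([[1.0, 0.0], [0.0, 2.0]])
W  = sp.eye(2, format="csr")
b  = np.array([-0.5, -3.0])
out = sparse_dnn_inference(Y0, [W], [b])
```

The total work $N_{\rm op}$ that the paper correlates with execution time $T_{\rm DNN}$ corresponds to the multiply-accumulate operations inside the `Y @ W` products across all layers.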

Authors (8)
  1. Jeremy Kepner (141 papers)
  2. Simon Alford (5 papers)
  3. Vijay Gadepally (131 papers)
  4. Michael Jones (92 papers)
  5. Lauren Milechin (55 papers)
  6. Albert Reuther (74 papers)
  7. Ryan Robinett (3 papers)
  8. Sid Samsi (6 papers)
Citations (11)
