GNNIE: GNN Inference Engine with Load-balancing and Graph-Specific Caching (2105.10554v2)

Published 21 May 2021 in cs.AR and cs.LG

Abstract: Graph neural network (GNN) analysis engines are vital for real-world problems that use large graph models. Challenges for a GNN hardware platform include the ability to (a) host a variety of GNNs, (b) handle the high sparsity of input vertex feature vectors and the graph adjacency matrix, along with the accompanying random memory access patterns, and (c) maintain load-balanced computation in the face of uneven workloads induced by high sparsity and power-law vertex degree distributions. This paper proposes GNNIE, an accelerator designed to run a broad range of GNNs. It tackles workload imbalance by (i) splitting vertex feature operands into blocks, (ii) reordering and redistributing computations, and (iii) using a novel flexible MAC architecture. It adopts a graph-specific, degree-aware caching policy that is well suited to real-world graph characteristics; the policy enhances on-chip data reuse and avoids random memory accesses to DRAM. GNNIE achieves average speedups of 21233x over a CPU and 699x over a GPU across multiple datasets on graph attention networks (GATs), graph convolutional networks (GCNs), GraphSAGE, GINConv, and DiffPool. Compared to prior approaches, GNNIE achieves an average speedup of 35x over HyGCN (which cannot implement GATs) on GCN, GraphSAGE, and GINConv, and, using 3.4x fewer processing units, an average speedup of 2.1x over AWB-GCN (which runs only GCNs).
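The intuition behind degree-aware caching can be sketched in a few lines. This is a hypothetical software model, not the paper's hardware policy: in a power-law graph, a few high-degree vertices appear in most neighbor lists, so pinning their feature rows on-chip captures most of the reuse. The function names and the hit-rate model below are illustrative assumptions.

```python
from collections import Counter

def degree_aware_cache(edges, cache_size):
    """Pick the cache_size highest-degree vertices to pin on-chip;
    these are referenced most often during neighbor aggregation."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return {v for v, _ in degree.most_common(cache_size)}

def hit_rate(edges, cached):
    """Fraction of per-edge feature fetches served from the cache
    in one aggregation sweep (simplified cost model)."""
    hits = total = 0
    for u, v in edges:
        total += 2                      # fetch both endpoints' features
        hits += (u in cached) + (v in cached)
    return hits / total

# Toy power-law-like graph: vertex 0 is a hub with degree 7.
edges = [(0, i) for i in range(1, 8)] + [(1, 2), (3, 4)]
cached = degree_aware_cache(edges, cache_size=1)
print(cached, round(hit_rate(edges, cached), 2))  # caching 1 of 8 vertices serves ~39% of fetches
```

Even with a single cached vertex, the hub's features cover a large share of fetches, which is why a degree-aware policy outperforms LRU-style replacement on power-law graphs with random access patterns.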

Authors (5)
  1. Sudipta Mondal (12 papers)
  2. Susmita Dey Manasi (4 papers)
  3. Kishor Kunal (6 papers)
  4. S. Ramprasath (13 papers)
  5. Sachin S. Sapatnekar (29 papers)
Citations (12)