IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge (2303.14162v1)
Abstract: In this paper, we propose IMA-GNN as an In-Memory Accelerator for centralized and decentralized Graph Neural Network inference, explore its potential in both settings, and provide a guideline for the community on flexible and efficient edge computation. Leveraging IMA-GNN, we first model the computation and communication latencies of edge devices. We then present practical case studies on GNN-based taxi demand and supply prediction, and adopt four large graph datasets to quantitatively compare and analyze the centralized and decentralized settings. Our cross-layer simulation results demonstrate that, on average, IMA-GNN in the centralized setting can obtain a ~790x communication speed-up compared to the decentralized GNN setting. However, the decentralized setting performs computation ~1400x faster while reducing the power consumption per device. This further underlines the need for a hybrid semi-decentralized GNN approach.
- Mehrdad Morsali
- Mahmoud Nazzal
- Abdallah Khreishah
- Shaahin Angizi
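The trade-off summarized in the abstract (fast communication but slower, centralized computation versus fast local computation but slow multi-hop communication) can be illustrated with a toy analytical latency model. The following sketch is an illustrative assumption for exposition only; the function names, formulas, and parameter values are hypothetical and are not the cost model used in the IMA-GNN paper.

```python
# Toy latency model contrasting centralized and decentralized GNN inference
# at the edge. All formulas and numbers below are illustrative assumptions,
# NOT the actual IMA-GNN cost model.

def centralized_latency(num_devices, feat_bytes, link_bw_bps, server_compute_s):
    """One hub aggregates all device features, then runs the full-graph GNN.
    Communication is a single fast uplink per device, but all computation
    happens on one accelerator."""
    comm = num_devices * feat_bytes * 8 / link_bw_bps  # upload to the hub
    return comm + server_compute_s

def decentralized_latency(num_hops, feat_bytes, p2p_bw_bps, local_compute_s):
    """Each device computes its own embedding on a small local sub-graph,
    but neighbor features traverse slower multi-hop device-to-device links."""
    comm = num_hops * feat_bytes * 8 / p2p_bw_bps  # multi-hop feature exchange
    return comm + local_compute_s

if __name__ == "__main__":
    # Hypothetical parameters chosen only to reproduce the qualitative trend:
    # centralized wins on communication, decentralized wins on computation.
    c = centralized_latency(num_devices=1000, feat_bytes=512,
                            link_bw_bps=1e9, server_compute_s=1e-2)
    d = decentralized_latency(num_hops=5, feat_bytes=512,
                              p2p_bw_bps=1e6, local_compute_s=1e-5)
    print(f"centralized  : {c * 1e3:.3f} ms")
    print(f"decentralized: {d * 1e3:.3f} ms")
```

Under such a model, a hybrid semi-decentralized scheme would aim to keep computation local while limiting the number of device-to-device hops, which is the direction the abstract points to.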