Learnable Sparsification of Die-to-Die Communication via Spike-Based Encoding (2501.08645v2)
Abstract: Efficient communication is central to both biological and AI systems. Biological brains address the challenge of long-range communication across regions with sparse, spike-based signaling that minimizes energy and latency. Modern AI workloads, by contrast, are increasingly constrained by bandwidth, creating bottlenecks that hamper scalability and efficiency. Inspired by the brain's pairing of dense, complex local computation with sparse inter-neuron communication, we propose heterogeneous neural networks that couple artificial neural networks (ANNs) with spiking neural networks (SNNs) placed at bandwidth-limited interfaces, such as chip boundaries, where spike-based communication reduces data-transfer overhead. Within each chip, dense ANN computation maintains high throughput, accuracy, and robustness. While SNNs have historically struggled to scale algorithmically, our approach sidesteps this long-standing challenge through algorithm-architecture co-design: spiking layers are confined to specific partitions, where learnable sparsity governs die-to-die communication. This composable design combines high ANN performance with low-bandwidth SNN efficiency. Evaluations on language processing and computer vision show up to 5.3x gains in energy efficiency and 15.2x reductions in latency, surpassing both purely spiking and purely non-spiking models. These improvements grow with model size. By targeting the inter-chip communication bottleneck with biologically inspired methods, this approach offers a promising path to more efficient AI systems.
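To make the architectural idea concrete, below is a minimal PyTorch sketch of a spiking layer confined to a partition boundary. This is an illustrative assumption, not the paper's implementation: the module names (`SpikeFunction`, `SpikingBoundary`, `HybridModel`), the per-channel learnable threshold, and the rectangular surrogate gradient are hypothetical stand-ins. The only structure carried over from the abstract is that dense ANN blocks sit on each die while binarized, sparsity-controlled activations cross the die-to-die link.

```python
import torch
import torch.nn as nn


class SpikeFunction(torch.autograd.Function):
    """Heaviside spike with a straight-through surrogate gradient."""

    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold,
        # making the (otherwise non-differentiable) spike trainable.
        surrogate = (membrane.abs() < 0.5).float()
        return grad_output * surrogate


class SpikingBoundary(nn.Module):
    """Binarizes activations crossing a hypothetical die-to-die link.

    A learnable per-channel threshold controls how many neurons fire,
    i.e. the sparsity of the traffic sent across the boundary.
    """

    def __init__(self, dim):
        super().__init__()
        self.threshold = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # Only 1-bit spikes (and nothing at all for silent neurons)
        # would need to traverse the inter-chip link.
        return SpikeFunction.apply(x - self.threshold)


class HybridModel(nn.Module):
    """Dense ANN blocks on each 'die'; spikes only at the partition."""

    def __init__(self, dim=512):
        super().__init__()
        self.die0 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.boundary = SpikingBoundary(dim)
        self.die1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):
        return self.die1(self.boundary(self.die0(x)))


if __name__ == "__main__":
    model = HybridModel()
    x = torch.randn(8, 512)
    spikes = model.boundary(model.die0(x))
    print(f"boundary sparsity: {1 - spikes.mean().item():.2%}")
```

In such a setup, the sparsity could be made an explicit training objective, e.g. by adding an L1 rate penalty on the spike outputs to push boundary traffic toward a bandwidth budget; how the learnable sparsity is actually trained in the paper is not specified by the abstract.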