
Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware (1812.01739v2)

Published 4 Dec 2018 in cs.LG and stat.ML

Abstract: Using Intel's Loihi neuromorphic research chip and ABR's Nengo Deep Learning toolkit, we analyze the inference speed, dynamic power consumption, and energy cost per inference of a two-layer neural network keyword spotter trained to recognize a single phrase. We perform comparative analyses of this keyword spotter running on more conventional hardware devices including a CPU, a GPU, Nvidia's Jetson TX1, and the Movidius Neural Compute Stick. Our results indicate that for this inference application, Loihi outperforms all of these alternatives on an energy cost per inference basis while maintaining equivalent inference accuracy. Furthermore, an analysis of tradeoffs between network size, inference speed, and energy cost indicates that Loihi's comparative advantage over other low-power computing devices improves for larger networks.

Citations (176)

Summary

The paper "Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware" analyzes the energy efficiency of a keyword spotting application running on Intel's Loihi neuromorphic research chip and compares it against conventional hardware platforms. Neuromorphic hardware, built from neuron-like processing elements, promises higher power efficiency and lower latency through event-driven computation and fine-grained architectural parallelism, and the study sets out to test how far those claimed advantages hold in practice.

The benchmark task is keyword spotting: processing audio signals in real time to detect a specified phrase, which makes it a timely test case for evaluating neuromorphic devices. The authors use ABR's Nengo Deep Learning (NengoDL) toolkit to deploy a two-layer neural network for this purpose and compare Loihi's performance against more conventional alternatives, including a CPU, a GPU, Nvidia's Jetson TX1, and the Movidius Neural Compute Stick (NCS).
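The paper itself does not include code, but the sketch below shows one way such a two-layer keyword spotter could be defined in Keras and converted for spiking deployment with NengoDL. The layer sizes (390 input features, two 256-unit hidden layers, 29 outputs), the spiking activation swap, and the simulation settings are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: a two-layer dense keyword spotter defined in Keras and
# converted to a spiking Nengo network with NengoDL. All sizes are assumptions.
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

n_features = 390  # assumed audio feature dimension per frame (e.g., stacked MFCCs)
n_hidden = 256    # assumed hidden-layer width
n_outputs = 29    # assumed output dimension (e.g., character probabilities)

inp = tf.keras.Input(shape=(n_features,))
h1 = tf.keras.layers.Dense(n_hidden, activation=tf.nn.relu)(inp)
h2 = tf.keras.layers.Dense(n_hidden, activation=tf.nn.relu)(h1)
out = tf.keras.layers.Dense(n_outputs)(h2)
model = tf.keras.Model(inputs=inp, outputs=out)

# Convert the rate-based Keras model into a Nengo network, swapping ReLUs for
# spiking rectified-linear neurons so the network can run as a spiking model.
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    synapse=0.005,  # output filtering to smooth spike trains (assumed value)
)

# Run a few timesteps of spiking inference on random features as a smoke test.
n_steps = 30
with nengo_dl.Simulator(converter.net, minibatch_size=1) as sim:
    features = np.random.randn(1, 1, n_features).astype(np.float32)
    features = np.tile(features, (1, n_steps, 1))  # hold the input over time
    outputs = sim.predict({converter.inputs[inp]: features})
    print(outputs[converter.outputs[out]].shape)  # (1, n_steps, n_outputs)
```

Deploying the converted network on Loihi itself would additionally require the Nengo Loihi backend and hardware access; the sketch above only covers the model definition and software simulation.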

Results and Analysis

The findings indicate that Loihi markedly surpasses the other hardware alternatives in energy cost per inference while retaining comparable inference accuracy, and that its advantage grows with larger network sizes, suggesting scalable energy-efficiency improvements. The mean energy cost per inference on Loihi is reported at 0.00027 Joules, compared with the CPU (0.0063), GPU (0.0298), Jetson TX1 (0.0056), and Movidius NCS (0.0015), a more than fivefold reduction in energy expenditure relative to the next most efficient device, the Movidius NCS.
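As a quick sanity check on these figures, the snippet below recomputes each device's energy cost relative to Loihi from the per-inference values quoted above; it adds nothing beyond that arithmetic.

```python
# Per-inference dynamic energy figures quoted above (Joules/inference).
energy_per_inference = {
    "Loihi": 0.00027,
    "Movidius NCS": 0.0015,
    "Jetson TX1": 0.0056,
    "CPU": 0.0063,
    "GPU": 0.0298,
}

loihi = energy_per_inference["Loihi"]
for device, joules in sorted(energy_per_inference.items(), key=lambda kv: kv[1]):
    print(f"{device:>12}: {joules:.5f} J/inference ({joules / loihi:.1f}x Loihi)")
```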

Scaling experiments reinforce these results and highlight Loihi's favorable scalability. As network size increases, Loihi continues to sustain real-time inference, unlike the Movidius NCS, whose architectural constraints slow processing on larger models. Figures and tables quantify these findings using power logging and analysis across the different inference configurations.
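The power accounting behind such comparisons reduces to a simple identity: dynamic energy per inference is the idle-subtracted power multiplied by the run's wall-clock time, divided by the number of inferences performed. Below is a hedged sketch of that calculation; the helper name and the example numbers are illustrative, not taken from the paper.

```python
def dynamic_energy_per_inference(running_power_w, idle_power_w, runtime_s, n_inferences):
    """Estimate dynamic (idle-subtracted) energy cost per inference, in Joules."""
    dynamic_power_w = running_power_w - idle_power_w
    return dynamic_power_w * runtime_s / n_inferences

# Illustrative numbers only: a device drawing 1.5 W while running and 1.0 W idle,
# completing 2,000 inferences in 10 seconds, costs 2.5 mJ of dynamic energy each.
print(dynamic_energy_per_inference(1.5, 1.0, 10.0, 2000))  # -> 0.0025
```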

Discussion and Implications

The implications of this research are both practical and theoretical. Practically, the substantial energy-efficiency gains demonstrated by Loihi matter for mobile and IoT devices, where power consumption is a critical constraint. Theoretically, the gains rest on Loihi's architectural support for spiking neural networks, which could pave the way for running more complex neural architectures on edge devices under strict power budgets.

Looking forward, advances in neuromorphic processors, such as better parameter optimization and modified network architectures, will be important for improving computational performance further. The paper suggests that these devices may offer a practical path toward sophisticated real-time applications beyond keyword spotting, with potential uses in speech recognition and other real-time, data-intensive domains.

In conclusion, the paper effectively demonstrates the advantage of neuromorphic hardware for keyword spotting, showing substantial power-efficiency gains without compromising accuracy. These findings underscore the potential of neuromorphic architectures for optimizing AI-driven tasks, particularly in energy-constrained environments.
