
Object Detection with Spiking Neural Networks on Automotive Event Data (2205.04339v1)

Published 9 May 2022 in cs.CV

Abstract: Automotive embedded algorithms have very high constraints in terms of latency, accuracy and power consumption. In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. Indeed, SNNs are more biologically realistic neural networks where neurons communicate using discrete and asynchronous spikes, a naturally energy-efficient and hardware friendly operating mode. Event data, which are binary and sparse in space and time, are therefore the ideal input for spiking neural networks. But to date, their performance was insufficient for automotive real-world problems, such as detecting complex objects in an uncontrolled environment. To address this issue, we took advantage of the latest advancements in matter of spike backpropagation - surrogate gradient learning, parametric LIF, SpikingJelly framework - and of our new "voxel cube" event encoding to train 4 different SNNs based on popular deep learning networks: SqueezeNet, VGG, MobileNet, and DenseNet. As a result, we managed to increase the size and the complexity of SNNs usually considered in the literature. In this paper, we conducted experiments on two automotive event datasets, establishing new state-of-the-art classification results for spiking neural networks. Based on these results, we combined our SNNs with SSD to propose the first spiking neural networks capable of performing object detection on the complex GEN1 Automotive Detection event dataset.

Citations (80)

Summary

  • The paper introduces a novel voxel cube encoding that efficiently preserves temporal information while reducing computational load.
  • It demonstrates successful training of deep SNN architectures using surrogate gradient learning on automotive event datasets.
  • The study pioneers integrating SNNs with a single-shot detector framework, enabling low-power and effective object detection.

Object Detection with Spiking Neural Networks on Automotive Event Data

The paper "Object Detection with Spiking Neural Networks on Automotive Event Data" presents a significant contribution to the field of spiking neural networks (SNNs) and their application to automotive event data for object detection tasks. This research is particularly notable for leveraging the unique characteristics of SNNs, which are inspired by biological neural networks and known for their energy efficiency and low-latency processing capabilities, making them well-suited for embedded automotive applications.

Event cameras provide data with very high temporal resolution by recording changes in brightness rather than capturing full frames, which poses challenges for traditional deep learning models designed for frame-based inputs. This paper takes an innovative approach by training SNNs directly on the event stream, circumventing the need for frame conversion and fully exploiting its asynchronous, sparse nature.
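
For readers less familiar with the format, each event in such a stream is a small asynchronous tuple of pixel coordinates, timestamp, and polarity. The toy snippet below illustrates the representation; the field names and values are purely illustrative and are not taken from the datasets used in the paper.

```python
import numpy as np

# A tiny, hand-made event stream: (x, y, timestamp in microseconds, polarity).
# Polarity 1 = brightness increase, 0 = brightness decrease.
events = np.array(
    [(12, 40, 103_500, 1),   # pixel (12, 40) brightened at t = 103.5 ms
     (13, 40, 103_620, 1),
     (87,  5, 104_010, 0)],  # pixel (87, 5) darkened shortly after
    dtype=[("x", "<i4"), ("y", "<i4"), ("t", "<i8"), ("p", "<i1")],
)
```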

Key Contributions

  1. Voxel Cube Encoding: The researchers introduce a novel event data representation called "voxel cubes." This encoding reduces the number of timesteps while preserving the temporal information of events by utilizing the channel dimension. The approach is crucial for reducing computational load without sacrificing temporal precision, making it highly suitable for SNN applications (a minimal sketch follows this list).
  2. Training Deep SNNs: Utilizing recent advancements in spike backpropagation techniques, specifically surrogate gradient learning, the paper reports successful training of SNNs based on architectures derived from well-known convolutional neural networks (CNNs) like SqueezeNet, VGG, MobileNet, and DenseNet. This effort sets new benchmarks for SNNs in classification tasks on the Prophesee NCARS and GEN1 Automotive Classification datasets.
  3. First SNN for Object Detection: The paper also proposes the first SNN capable of object detection using the GEN1 Automotive Detection event dataset. By integrating SNNs with the SSD (Single-Shot Detector) framework, the researchers achieved notable mAP scores while maintaining a low number of parameters and operations, thereby opening the possibility of their deployment in real-world automotive applications.
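
To make the voxel cube idea concrete, the sketch below bins a raw event stream into a small binary tensor whose channel axis carries "micro" time bins inside each coarse SNN timestep. This is a minimal NumPy illustration of the encoding as described above; the function name, bin counts, and tensor layout are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def voxel_cube_encode(events, height, width, n_timesteps=5, n_micro_bins=3):
    """Bin raw events into a binary voxel-cube tensor.

    events: structured array with fields x, y, t (microseconds), p (0/1 polarity),
            as in the toy stream shown earlier.
    Returns a tensor of shape (n_timesteps, 2 * n_micro_bins, height, width):
    fine temporal detail inside each SNN timestep is kept along the channel axis
    (the "micro" time bins). Illustrative only, not the authors' exact code.
    """
    t0, t1 = events["t"].min(), events["t"].max() + 1
    duration = t1 - t0
    cubes = np.zeros((n_timesteps, 2 * n_micro_bins, height, width), dtype=np.float32)

    # Map each event to a coarse timestep and a fine micro bin within that timestep.
    rel = (events["t"] - t0) / duration                            # in [0, 1)
    step = np.minimum((rel * n_timesteps).astype(int), n_timesteps - 1)
    micro = np.minimum(((rel * n_timesteps) % 1 * n_micro_bins).astype(int),
                       n_micro_bins - 1)
    chan = events["p"].astype(int) * n_micro_bins + micro          # polarity-major channels

    cubes[step, chan, events["y"], events["x"]] = 1.0              # binary occupancy
    return cubes
```

Keeping the fine temporal detail in the channel dimension is what lets the network run with only a handful of SNN timesteps, which in turn keeps latency and spike counts low.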

Technical Insights

The research highlights the importance of event data preprocessing, specifically the voxel cube technique, which enhances SNN performance by efficiently encoding fine-grained temporal information. Additionally, the use of surrogate gradients, combined with parametric LIF neurons, facilitates learning in deeper SNNs, which have traditionally struggled to converge during training.
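
As a rough illustration of how these two ingredients fit together, the sketch below implements a leaky integrate-and-fire neuron with a learnable membrane time constant and an arctan-shaped surrogate gradient in plain PyTorch. It follows the general recipe described here (the paper itself relies on the SpikingJelly framework); the class names and the exact parametrization are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class ATanSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth arctan derivative in the backward pass."""
    @staticmethod
    def forward(ctx, v, alpha):
        ctx.save_for_backward(v)
        ctx.alpha = alpha
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of (1/pi) * arctan(pi * alpha * v / 2) + 1/2 with respect to v.
        grad = ctx.alpha / 2 / (1 + (torch.pi / 2 * ctx.alpha * v) ** 2)
        return grad_output * grad, None

class ParametricLIF(nn.Module):
    """LIF neuron with a learnable membrane time constant (illustrative, not the authors' code)."""
    def __init__(self, init_tau=2.0, v_threshold=1.0, alpha=2.0):
        super().__init__()
        # Parametrize 1/tau through a sigmoid so tau stays above 1 during training.
        self.w = nn.Parameter(-torch.log(torch.tensor(init_tau - 1.0)))
        self.v_threshold = v_threshold
        self.alpha = alpha
        self.v = None

    def reset(self):
        self.v = None

    def forward(self, x):                       # x: input current at one timestep
        if self.v is None:
            self.v = torch.zeros_like(x)
        decay = torch.sigmoid(self.w)           # learnable 1/tau
        self.v = self.v + decay * (x - self.v)  # leaky integration of the input
        spike = ATanSurrogate.apply(self.v - self.v_threshold, self.alpha)
        self.v = self.v * (1.0 - spike)         # hard reset where a spike was emitted
        return spike
```

The neuron is called once per timestep on the voxel-cube input; because the spike itself is non-differentiable, the surrogate gradient is what allows standard backpropagation through time to train the whole network.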

Moreover, the authors address the sparsity and operation efficiency of SNNs by reporting metrics such as spike sparsity and accumulated operations (ACCs), crucial for assessing SNNs' suitability for deployment on energy-constrained hardware like neuromorphic chips.
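
A simplified way to obtain such figures is to hook the spiking layers of a trained model and count emitted spikes on a sample, as sketched below. This is an illustrative estimate only; the helper name and accounting conventions are assumptions, not the paper's exact measurement protocol.

```python
import torch

def measure_spike_stats(model, spike_layers, sample):
    """Count emitted spikes and estimate sparsity for one input.

    spike_layers: modules whose outputs are binary 0/1 spike tensors
    (e.g. instances of the ParametricLIF sketch above). Multiplying the
    total spike count by the average synaptic fan-out gives a rough estimate
    of accumulate operations (ACCs), since binary spikes need no multiplications.
    """
    stats = {"spikes": 0, "neurons": 0}
    hooks = []

    def hook(_module, _inputs, output):
        stats["spikes"] += int(output.sum().item())   # spikes are 0/1
        stats["neurons"] += output.numel()

    for layer in spike_layers:
        hooks.append(layer.register_forward_hook(hook))

    with torch.no_grad():
        model(sample)

    for h in hooks:
        h.remove()

    sparsity = 1.0 - stats["spikes"] / max(stats["neurons"], 1)
    return {"total_spikes": stats["spikes"], "sparsity": sparsity}
```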

Implications and Future Directions

The practical implications of this research are substantial, not only for the automotive industry but also for any domain requiring efficient and rapid processing of sparse, event-based data. The successful deployment of complete SNNs for task-specific applications could greatly enhance the viability of low-power, high-speed inference systems, potentially influencing the design of future neural network models and hardware.

In future developments, integration of this research into neuromorphic platforms, such as Intel’s Loihi, could further exploit the energy-efficient properties of SNNs, pushing the boundaries of what is achievable in embedded systems. The potential to extend these findings to more complex datasets and tasks, while maintaining or improving efficiency, opens exciting avenues for ongoing and future research in the field.
