
Event-based Cameras (EBCs)

Updated 9 December 2025
  • Event-based Cameras (EBCs) are bio-inspired sensors that trigger events on brightness changes, offering high temporal resolution and low latency.
  • They employ asynchronous pixel-level computations with sparse spatiotemporal data, significantly cutting down computational costs compared to frame-based approaches.
  • EBCs have demonstrated real-time object recognition and detection under challenging lighting conditions, with improved energy efficiency and robustness.

Event-based Cameras (EBCs) are bio-inspired vision sensors that operate fundamentally differently from conventional frame-based cameras. Rather than capturing full image frames at fixed intervals, EBCs report localized brightness changes asynchronously and independently for each pixel, emitting "events" when the change in logarithmic pixel intensity surpasses a preset threshold. This design yields outstanding temporal resolution, high dynamic range, and extremely low latency, enabling high-speed, low-power sensing under challenging lighting and dynamic conditions. EBCs are central to modern neuromorphic vision research, offering sparse spatiotemporal data streams ideal for real-time computer vision, robotics, scientific imaging, and embedded processing scenarios (Messikommer et al., 2020).

1. Operating Principle and Pixel-Level Architecture

EBC pixels continuously monitor their own log-intensity $L(x, t) = \log I(x, t)$ and fire an event when

$$p_k \left[ L(x, t_k) - L(x, t_{k-1}) \right] \geq C$$

where $C$ is a programmable contrast threshold and $p_k \in \{+1, -1\}$ is the event polarity indicating an intensity increase or decrease (Messikommer et al., 2020, Melamed et al., 10 Sep 2025). Events are timestamped with sub-microsecond precision and take the form

$$e_k = (x_k, y_k, t_k, p_k)$$

with independent, asynchronous operation across all pixels.
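
To make the pixel model concrete, the following is a minimal NumPy sketch that emulates event generation from a sequence of intensity frames. The function name, the default threshold value, and the simplification of emitting at most one event per pixel per frame are illustrative assumptions, not a description of any particular sensor's implementation.

```python
import numpy as np

def generate_events(frames, timestamps, C=0.25, eps=1e-6):
    """Emulate the EBC pixel model: emit (x, y, t, p) whenever the change in
    log-intensity since the pixel's last event reaches the contrast threshold C.
    Simplification: at most one event per pixel per frame."""
    ref = np.log(frames[0].astype(np.float64) + eps)    # per-pixel reference log-intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        logI = np.log(frame.astype(np.float64) + eps)
        diff = logI - ref
        ys, xs = np.nonzero(np.abs(diff) >= C)           # pixels crossing the threshold
        for y, x in zip(ys, xs):
            p = 1 if diff[y, x] > 0 else -1              # polarity = sign of the change
            events.append((x, y, t, p))
            ref[y, x] = logI[y, x]                       # reset reference only where an event fired
    return events
```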

Key architectural elements include:

  • Logarithmic photoreceptors for dynamic range compression
  • On-pixel memory storing last reference intensity
  • Digital comparators for thresholding in both polarities
  • Asynchronous event readout via address-event representation (AER) (Xiao et al., 2022)

EBCs typically achieve >120 dB dynamic range, <10 μs latency, and microsecond timestamping. Power draw is 10–50 mW for the core sensor, substantially lower than that of comparable frame-based sensors (Hoang, 2023).

2. Data Representations and Conventional Handling

While the raw event stream is sparse and temporally fine-grained, most existing machine learning algorithms require synchronous, dense inputs. Thus, events are often accumulated into image-like tensors:

  • Two-channel histograms by polarity: $H^+(x, y; \Delta t)$, $H^-(x, y; \Delta t)$
  • Time surfaces: recency maps normalized within a fixed window
  • Temporal Binary Representation (TBR): multi-bin binary encoding at each pixel (Iddrisu et al., 19 Aug 2024)
  • Voxel grids: spatio-temporal binning for 3D CNNs

A common pipeline is to aggregate $N$ events into a frame $H(u, c)$ and process with standard CNNs—at the cost of discarding sparsity and reintroducing latency proportional to the number of events (Messikommer et al., 2020).
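
As a concrete illustration of the first two representations, the sketch below accumulates a list of (x, y, t, p) events into a two-channel polarity histogram and an exponentially decaying time surface. The function names, decay constant, and resolution are illustrative choices, not a standard defined by the cited works.

```python
import numpy as np

def events_to_histogram(events, H, W):
    """Two-channel count image: channel 0 counts ON events, channel 1 counts OFF events."""
    hist = np.zeros((2, H, W), dtype=np.float32)
    for x, y, _, p in events:
        hist[0 if p > 0 else 1, y, x] += 1.0
    return hist

def events_to_time_surface(events, H, W, t_ref, tau=50e-3):
    """Recency map: each pixel stores exp(-(t_ref - t_last) / tau) for its most recent event."""
    t_last = np.full((H, W), -np.inf)
    for x, y, t, _ in events:
        t_last[y, x] = max(t_last[y, x], t)
    surface = np.exp(-(t_ref - t_last) / tau)            # pixels with no event decay to 0
    return surface.astype(np.float32)

# Example: build a 180x240 frame from the events of the sketch in Section 1
# frame = events_to_histogram(events, 180, 240)
```

Either tensor can then be passed to a standard CNN, exactly as in the dense pipeline just described.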

3. Asynchronous, Sparse Processing and Network Conversion

Event sparsity affords unique advantages for neural network inference. The methodology proposed in (Messikommer et al., 2020) converts traditionally synchronous CNNs trained on image-like event data to asynchronous, sparse networks:

  • Sparse Recursive Representations (SRR) maintain CNN state and update only activations affected by a new event
  • Submanifold Sparse Convolution (SSC) applies convolution kernels only at "active sites" $A_t$ where event data is nonzero
  • Update rules propagate activation increments per event through layers, referencing only the local rulebook $R_{k, n}$

Formally, for each event, affected input sites $F_0$ are determined, and sparse updates are propagated layer-wise using a chain of incremental rulebooks. One-shot SSC evaluation is used for new activations and deactivations. By induction, this yields outputs strictly equivalent to the dense, frame-based computation, with computational complexity drastically reduced. This reduction scales as

$$C_{\text{async sparse}} / C_{\text{dense}} \lesssim \frac{\left(1+(k-1)\,l\right)^{\gamma}}{N_{\text{frame pixels}}} \ll 1$$

where $k$ is the convolution kernel size, $l$ the number of layers (so $1+(k-1)\,l$ is the side length of the receptive field), and $\gamma$ the empirical fractal dimension of the edge-like event data (typically $\gamma \approx 1.3$–$1.7$) (Messikommer et al., 2020).
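
The submanifold rule itself is easy to state in code. The following is a deliberately naive, dense-array sketch of SSC, assuming `active` holds only in-bounds coordinates; practical implementations use hash maps and precomputed rulebooks, and the names here are illustrative rather than taken from the paper.

```python
import numpy as np

def submanifold_conv(features, active, weight, bias=None):
    """Naive submanifold sparse convolution: outputs exist only at active sites,
    and each output gathers contributions only from active input neighbours.
    features: (C_in, H, W) array, nonzero only at active sites
    active:   set of in-bounds (y, x) active sites
    weight:   (C_out, C_in, k, k) kernel, k odd"""
    C_out, C_in, k, _ = weight.shape
    r = k // 2
    out = np.zeros((C_out,) + features.shape[1:], dtype=features.dtype)
    for (y, x) in active:                        # outputs only at active sites
        acc = np.zeros(C_out, dtype=features.dtype)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                yy, xx = y + dy, x + dx
                if (yy, xx) in active:           # skip inactive neighbours entirely
                    acc += weight[:, :, dy + r, dx + r] @ features[:, yy, xx]
        out[:, y, x] = acc if bias is None else acc + bias
    return out
```

When a new event activates an input site, only the output sites whose kernel footprint covers that site need re-evaluation, which is what the per-event update rules above exploit.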

4. Exploiting Spatio-Temporal Sparsity: Complexity and Latency Reduction

Event streams are sparse both spatially (edges, motion boundaries) and temporally (no events in static regions). The number of active sites per layer $n_l$ grows sublinearly with receptive field size, yielding practical cost reductions of up to $20\times$ compared to dense CNN inference. Unlike frame-based pipelines—with cost scaling with $H \times W \times C$—the asynchronous method may update only $\sim 1$–$2\%$ of all activations per event (Messikommer et al., 2020).
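
As a rough illustrative plug-in of the complexity bound above (the kernel size, depth, fractal dimension, and sensor resolution are assumed here purely for the arithmetic), take a $3\times 3$ kernel ($k = 3$), $l = 10$ layers, $\gamma = 1.5$, and a $240 \times 180$ sensor:

$$\frac{C_{\text{async sparse}}}{C_{\text{dense}}} \lesssim \frac{\left(1 + (3-1)\cdot 10\right)^{1.5}}{240 \times 180} = \frac{21^{1.5}}{43200} \approx 2.2 \times 10^{-3},$$

so the work triggered by a single event is several orders of magnitude below a full dense forward pass under these assumptions.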

Memory and computational overhead drops accordingly: for N-Caltech101 recognition, accuracy is preserved at $0.745$ (top-1) with an $8$–$19\times$ FLOP reduction per event for asynchronous sparse networks (vs. $0.761$ accuracy for the dense CNN) (Messikommer et al., 2020).

5. Applications and Experimental Validation

Event-based cameras and asynchronous sparse networks have demonstrated high performance in key computer vision domains:

  • Object recognition (N-Caltech101, N-Cars): VGG-style architectures using sliding-window event histograms, achieving $8$–$19\times$ lower computation per event with comparable accuracy to the synchronous state of the art (Messikommer et al., 2020)
  • Object detection (N-Caltech101, Gen1 Automotive): YOLO-style detectors, with mean average precision (mAP) maintained at $0.615$–$0.643$ while requiring $5$–$8\times$ fewer FLOPs than asynchronous or synchronous competitors (Messikommer et al., 2020)

Importantly, the framework of asynchronous updates is agnostic to event representation, network architecture, and task. It is compatible with ResNet, U-Net, SSD, and YOLO architectures and supports various event input formats (histograms, time surfaces, learned embeddings), so long as sparse, local updates can be computed (Messikommer et al., 2020).

6. Integration Strategies, Hardware Implications, and Broader Impact

The recommended integration steps for deploying asynchronous sparse inference are:

  1. Select a sparse-updatable event embedding $H(u, c)$
  2. Train a standard CNN on dense $H$-frames using backpropagation
  3. At deployment, swap each convolution/pool/ReLU with SSC-based asynchronous update counterparts
  4. Maintain activation state and rulebooks for each layer, updating only local regions per incoming event
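
A minimal sketch of step 4 is shown below, reusing the single-site form of the SSC rule from Section 3; the class name `AsyncSparseLayer` and its interface are hypothetical, illustrating cached per-layer state plus local per-event updates rather than an existing library API.

```python
import numpy as np

def conv_at_site(features, active, weight, y, x):
    """Single-site version of the naive SSC sketch in Section 3."""
    C_out, C_in, k, _ = weight.shape
    r = k // 2
    acc = np.zeros(C_out, dtype=np.float32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            if (yy, xx) in active:
                acc += weight[:, :, dy + r, dx + r] @ features[:, yy, xx]
    return acc

class AsyncSparseLayer:
    """Hypothetical per-layer state for asynchronous deployment."""
    def __init__(self, weight, H, W):
        self.weight = weight                                   # (C_out, C_in, k, k)
        self.state = np.zeros((weight.shape[0], H, W), np.float32)
        self.active = set()                                    # output sites computed so far

    def update(self, in_features, in_active, changed):
        """Recompute outputs only where a changed input site falls inside the
        kernel footprint of an active site; return the changed output sites."""
        r = self.weight.shape[-1] // 2
        touched = {(y + dy, x + dx)
                   for (y, x) in changed
                   for dy in range(-r, r + 1)
                   for dx in range(-r, r + 1)}
        touched &= in_active                                   # submanifold rule
        for (y, x) in touched:
            self.state[:, y, x] = conv_at_site(in_features, in_active,
                                               self.weight, y, x)
        self.active |= touched
        return touched                                         # feed into the next layer
```

Chaining the returned set of changed sites from one layer into the next propagates a single event's influence through the network while leaving all other cached activations untouched.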

This approach yields smaller memory footprint, reduced data movement, and lower power requirements—enabling operation on neuromorphic processors and real-time embedded systems. By preserving native event sparsity, computational cost is minimized and latencies at sub-millisecond scale are achieved without loss of inference quality (Messikommer et al., 2020).

7. Limitations and Prospective Directions

The main restriction of sparse processing is the need to choose event representations admitting local, incremental updates. Representations relying on global statistics may not be compatible. The approach does not require changes to training; only the deployment engine differs. Event-based pipelines remain sensitive to input noise, jitter, and the specific spatial/temporal distribution of events. Future research may expand hardware co-design for ultra-low-latency event processing and generalize the framework to streaming data beyond vision modalities.

Event-based cameras, when combined with asynchronous sparse convolutional networks, unlock efficient, latency-minimal, high-dynamic-range vision solutions and represent a convergence point between neuromorphic hardware and algorithmic innovation in modern real-time machine perception (Messikommer et al., 2020).
