Hardware-Algorithm Co-design Enabling Processing-in-Pixel-in-Memory (P2M) for Neuromorphic Vision Sensors (2310.16844v1)

Published 7 Oct 2023 in cs.AR and eess.IV

Abstract: The high volume of data transmitted between the edge sensor and the cloud processor creates energy and throughput bottlenecks for resource-constrained edge devices focused on computer vision. Hence, researchers are investigating approaches that execute computations closer to the sensor (e.g., near-sensor processing, in-sensor processing, in-pixel processing) to reduce the transmission bandwidth. Specifically, in-pixel processing for neuromorphic vision sensors (e.g., dynamic vision sensors (DVS)) incorporates asynchronous multiply-accumulate (MAC) operations within the pixel array, improving energy efficiency. In a CMOS implementation, a low-overhead, energy-efficient analog MAC accumulates charge on a passive capacitor; however, the capacitor's limited charge retention time constrains the algorithmic integration time choices, impacting accuracy, bandwidth, energy, and training efficiency. This creates a design trade-off on the hardware side: the need for a low-leakage compute unit that still maintains the area and energy benefits. In this work, we present a holistic analysis of the hardware-algorithm co-design trade-off imposed by the hardware's limited integration time, together with techniques to improve the leakage performance of the in-pixel analog MAC operations.
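The core trade-off described in the abstract can be illustrated with a small behavioral model: an analog MAC whose partial sum is held on a leaky capacitor forgets early contributions when the integration window is long relative to the retention time. The sketch below is a hypothetical, idealized illustration (the function name, exponential-decay leakage model, and parameter values are assumptions, not the paper's circuit model).

```python
import math

def leaky_mac(inputs, weights, t_step, tau):
    """Behavioral model of an analog MAC whose running sum is stored on a
    passive capacitor that leaks between accumulation events.

    inputs, weights: equal-length sequences (one MAC term per event)
    t_step: time between successive accumulation events (seconds)
    tau:    charge-retention time constant of the capacitor (seconds)

    Hypothetical illustration only: a real P2M pixel accumulates charge,
    not floating-point values, and leakage need not be exponential.
    """
    decay = math.exp(-t_step / tau)
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc = acc * decay + x * w  # stored charge decays, then the new term is added
    return acc

xs = [1.0, 1.0, 1.0, 1.0]
ws = [0.5, 0.5, 0.5, 0.5]
ideal = sum(x * w for x, w in zip(xs, ws))         # ideal dot product: 2.0
good = leaky_mac(xs, ws, t_step=1e-6, tau=1e-2)    # retention >> integration window
leaky = leaky_mac(xs, ws, t_step=1e-6, tau=1e-6)   # retention ~ event spacing
```

When the retention time constant far exceeds the event spacing, `good` approaches the ideal dot product; when they are comparable, `leaky` underestimates it because early terms have decayed away. This is the accuracy-versus-integration-time tension that motivates the low-leakage compute unit discussed in the paper.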

