Flow-Based Visual Stream Compression for Event Cameras (2403.08086v1)

Published 12 Mar 2024 in cs.CV

Abstract: As the use of neuromorphic, event-based vision sensors expands, the need to compress their output streams has grown. While their operating principle ensures that event streams are spatially sparse, the sensors' high temporal resolution can produce high data rates, depending on scene dynamics. For systems operating in communication-bandwidth-constrained and power-constrained environments, these streams must be compressed before transmission to a remote receiver. We therefore introduce a flow-based method for the real-time, asynchronous compression of event streams as they are generated. The method leverages real-time optical-flow estimates to predict future events so that they need not be transmitted, drastically reducing the amount of data sent. The proposed compression is evaluated with a variety of metrics, including the spatiotemporal distance between event streams, and achieves an average compression ratio of 2.81 across a variety of event-camera datasets under the evaluation configuration used, with a median temporal error of 0.48 ms and an average spatiotemporal event-stream distance of 3.07. When combined with LZMA compression for non-real-time applications, the method achieves state-of-the-art average compression ratios ranging from 10.45 to 17.24. We further demonstrate that the proposed prediction algorithm is capable of real-time, low-latency event prediction.
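
The abstract's core mechanism, predicting future events from real-time optical flow so that they need not be transmitted, can be sketched in a few lines. The following is a minimal illustration under assumptions of our own (a dense per-pixel flow field in pixels per second, a fixed prediction horizon, and hypothetical matching tolerances `radius` and `t_tol`); it is not the authors' implementation.

```python
import numpy as np

def predict_events(events, flow, dt):
    """Advect events forward in time along a per-pixel optical-flow field.

    events : (N, 4) float array of (x, y, t, polarity)
    flow   : (H, W, 2) flow in pixels/second, (vx, vy) per pixel
    dt     : prediction horizon in seconds
    """
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    vx = flow[ys, xs, 0]
    vy = flow[ys, xs, 1]
    predicted = events.copy()
    predicted[:, 0] = np.round(events[:, 0] + vx * dt)  # shifted x
    predicted[:, 1] = np.round(events[:, 1] + vy * dt)  # shifted y
    predicted[:, 2] = events[:, 2] + dt                 # shifted timestamp
    return predicted

def select_residual_events(actual, predicted, radius=1.0, t_tol=0.5e-3):
    """Keep only events the receiver could not reproduce from flow:
    an actual event is dropped when a predicted event lies within
    `radius` pixels and `t_tol` seconds of it (tolerances are
    illustrative, not the paper's values)."""
    residual = []
    for ev in actual:
        d_xy = np.hypot(predicted[:, 0] - ev[0], predicted[:, 1] - ev[1])
        d_t = np.abs(predicted[:, 2] - ev[2])
        if not np.any((d_xy <= radius) & (d_t <= t_tol)):
            residual.append(ev)  # unpredicted: must be transmitted
    return np.asarray(residual)
```

Under this scheme, the receiver regenerates the dropped events from the same flow field, so only the residual stream plus periodic flow updates cross the link; the matching tolerances trade compression ratio against the kind of temporal and spatial error the abstract reports.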
