
Microsaccade-inspired Event Camera for Robotics (2405.17769v1)

Published 28 May 2024 in cs.RO and cs.CV

Abstract: Neuromorphic vision sensors, or event cameras, have made visual perception with extremely low reaction time possible, opening new avenues for high-dynamic robotics applications. An event camera's output depends on both motion and texture. However, the event camera fails to capture object edges that are parallel to the camera motion. This problem is intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading through the active mechanism of small involuntary eye movements, the most prominent of which are called microsaccades. By moving the eyes constantly and slightly during fixation, microsaccades can substantially maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining low reaction time and stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system, which we call the Artificial MIcrosaccade-enhanced EVent camera (AMI-EV). Benchmark comparisons validate the superior data quality of AMI-EV recordings in scenarios where both standard cameras and event cameras fail to deliver. Various real-world experiments demonstrate the potential of the system to facilitate robotics perception for both low-level and high-level vision tasks.
