Improving Fast Auto-Focus with Event Polarity (2303.08611v2)
Abstract: Fast and accurate auto-focus in adverse conditions remains a challenging task. The emergence of event cameras has opened up new possibilities for addressing this challenge. This paper presents a new high-speed and accurate event-based focusing algorithm. Specifically, the symmetric relationship between event polarities during focusing is investigated, and an event-based focus evaluation function is derived from the operating principles of event cameras and the imaging model of the focusing process. Comprehensive experiments on the public event-based autofocus dataset (EAD) show the robustness of the proposed method. Furthermore, precise focus, with an error of less than one depth of focus, is achieved within 0.004 seconds on our self-built high-speed focusing platform. The dataset and code will be made publicly available.
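To make the polarity idea concrete, below is a minimal sketch of one way a polarity-based focus evaluation could work, not the paper's released implementation. The assumption it encodes: as the lens sweeps through focus, edge contrast first rises (one polarity dominates) and then falls (the opposite polarity dominates), so the per-bin rates of positive and negative events cross near the in-focus position. The function name `estimate_focus_time`, the binning scheme, and the zero-crossing rule are all illustrative choices.

```python
import numpy as np

def estimate_focus_time(t, p, num_bins=200):
    """Estimate the in-focus time from events recorded during a focus sweep.

    Illustrative sketch (not the paper's code): positive and negative event
    rates are assumed roughly symmetric about the in-focus position, so their
    signed imbalance changes sign near focus.

    t : (N,) event timestamps in seconds
    p : (N,) event polarities, +1 or -1
    """
    edges = np.linspace(t.min(), t.max(), num_bins + 1)
    pos_rate, _ = np.histogram(t[p > 0], bins=edges)
    neg_rate, _ = np.histogram(t[p < 0], bins=edges)

    # Signed imbalance between polarities; it passes through zero near focus.
    imbalance = pos_rate.astype(float) - neg_rate.astype(float)

    # Locate the first bin where the imbalance changes sign (the crossing).
    sign_change = np.where(np.diff(np.sign(imbalance)) != 0)[0]
    if len(sign_change) == 0:
        return None  # no crossing: the sweep may not have passed through focus

    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[sign_change[0]]
```

In practice, `t` and `p` would come from the event camera's SDK during a constant-speed sweep, and the returned time would be mapped back to a lens position; denoising and a more careful symmetry criterion, as developed in the paper, would be needed for robustness.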