A Deep Ordinal Distortion Estimation Approach for Distortion Rectification (2007.10689v2)

Published 21 Jul 2020 in cs.CV

Abstract: Distortion is widespread in images captured by popular wide-angle and fisheye cameras. Despite the long history of distortion rectification, accurately estimating distortion parameters from a single distorted image remains challenging. The main reason is that these parameters have only an implicit relationship with image features, which hinders networks from fully learning the distortion information. In this work, we propose a novel distortion rectification approach that obtains more accurate parameters with higher efficiency. Our key insight is that distortion rectification can be cast as the problem of learning an ordinal distortion from a single distorted image. To solve this problem, we design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution. In contrast to implicit distortion parameters, the proposed ordinal distortion has a more explicit relationship with image features and thus significantly boosts the distortion perception of neural networks. Considering the redundancy of distortion information, our approach uses only a part of the distorted image for ordinal distortion estimation, showing promise for efficient distortion rectification. To our knowledge, we are the first to unify heterogeneous distortion parameters into a learning-friendly intermediate representation through ordinal distortion, bridging the gap between image features and distortion rectification. Experimental results demonstrate that our approach outperforms state-of-the-art methods by a significant margin, with an approximately 23% improvement in quantitative evaluation while achieving the best visual quality. The code is available at https://github.com/KangLiao929/OrdinalDistortion.
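
To make the ordinal-distortion idea concrete, the following is a minimal sketch in Python. It assumes a standard polynomial radial distortion model, delta(r) = 1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8, samples the distortion level at a few increasing radii from the image center (the ordinal form), and recovers the parameters from those samples by solving a small linear system. The function names, the four-coefficient model, and the sample radii are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ordinal_distortion(ks, radii):
    """Sample the distortion level delta(r) = 1 + k1*r^2 + k2*r^4 + ...
    at a set of increasing radii -- the ordinal-distortion representation."""
    radii = np.asarray(radii, dtype=float)
    # Column n holds r_i^(2(n+1)), i.e. r^2, r^4, r^6, r^8 for four coefficients.
    powers = np.stack([radii ** (2 * (n + 1)) for n in range(len(ks))], axis=1)
    return 1.0 + powers @ np.asarray(ks, dtype=float)

def params_from_ordinal(deltas, radii, num_coeffs=4):
    """Recover polynomial coefficients k_n from sampled distortion levels
    by solving the linear system delta_i - 1 = sum_n k_n * r_i^(2(n+1))."""
    radii = np.asarray(radii, dtype=float)
    A = np.stack([radii ** (2 * (n + 1)) for n in range(num_coeffs)], axis=1)
    ks, *_ = np.linalg.lstsq(A, np.asarray(deltas, dtype=float) - 1.0, rcond=None)
    return ks

# Example: a barrel-distortion-like coefficient set, sampled at four radii
# normalized to [0, 1] from the image center outward.
k_true = [-0.3, 0.05, -0.01, 0.002]
r_samples = [0.25, 0.5, 0.75, 1.0]
deltas = ordinal_distortion(k_true, r_samples)
print("ordinal distortion:", deltas)
print("recovered k:", params_from_ordinal(deltas, r_samples))
```

The point of the intermediate representation is visible here: the sampled distortion levels vary monotonically with radius and relate directly to how much image content is displaced at each radius, so a network predicting this ordered sequence faces a more learnable target than one regressing the heterogeneous coefficients k_n directly, while the coefficients remain recoverable from the sequence afterward.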

Authors (3)
  1. Kang Liao (37 papers)
  2. Chunyu Lin (48 papers)
  3. Yao Zhao (272 papers)
Citations (24)