Multi-view Disparity Estimation Using a Novel Gradient Consistency Model (2405.17029v1)
Abstract: Variational approaches to disparity estimation typically use a linearised brightness constancy constraint, which only applies in smooth regions and over small distances. Accordingly, current variational approaches rely on a schedule to progressively include image data. This paper proposes the use of Gradient Consistency information to assess the validity of the linearisation; this information is used to determine the weights applied to the data term as part of an analytically inspired Gradient Consistency Model. The Gradient Consistency Model penalises the data term for view pairs that have a mismatch between the spatial gradients in the source view and the spatial gradients in the target view. Instead of relying on a tuned or learned schedule, the Gradient Consistency Model is self-scheduling, since the weights evolve as the algorithm progresses. We show that the Gradient Consistency Model outperforms standard coarse-to-fine schemes and the recently proposed progressive inclusion of views approach in both rate of convergence and accuracy.
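The abstract's core idea, down-weighting the data term for view pairs whose spatial gradients disagree between the source view and the warped target view, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual Gradient Consistency Model: the inverse-quadratic weighting function, the `alpha` parameter, and the function names are all assumptions introduced here for illustration.

```python
import numpy as np

def spatial_gradients(img):
    """Central-difference spatial gradients of a 2-D image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return gx, gy

def gradient_consistency_weights(source, warped_target, alpha=1.0):
    """Per-pixel data-term weights that penalise view pairs whose
    spatial gradients in the source view disagree with those in the
    target view warped by the current disparity estimate.

    NOTE: the weighting function used in the paper differs; this
    inverse-quadratic form is an illustrative assumption.
    """
    sx, sy = spatial_gradients(source)
    tx, ty = spatial_gradients(warped_target)
    mismatch = (sx - tx) ** 2 + (sy - ty) ** 2      # squared gradient difference
    grad_mag = sx ** 2 + sy ** 2 + tx ** 2 + ty ** 2
    # Weight is near 1 where the gradients agree (linearised brightness
    # constancy is trustworthy) and falls toward 0 where they do not.
    return grad_mag / (grad_mag + alpha * mismatch + 1e-12)
```

Because the warped target is recomputed from the current disparity estimate at each iteration, weights of this form rise automatically as the estimate improves, which is the sense in which such a scheme is self-scheduling rather than relying on a tuned coarse-to-fine schedule.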