
TSAR-MVS: Textureless-aware Segmentation and Correlative Refinement Guided Multi-View Stereo

Published 19 Aug 2023 in cs.CV (arXiv:2308.09990v4)

Abstract: The reconstruction of textureless areas has long been a challenging problem in MVS due to the lack of reliable pixel correspondences between images. In this paper, we propose Textureless-aware Segmentation And Correlative Refinement guided Multi-View Stereo (TSAR-MVS), a novel method that effectively tackles the challenges posed by textureless areas in 3D reconstruction through filtering, refinement, and segmentation. First, we implement joint hypothesis filtering, a technique that merges a confidence estimator with a disparity discontinuity detector to eliminate incorrect depth estimates. Second, to propagate pixels with confident depth, we introduce an iterative correlative refinement strategy that leverages RANSAC to fit 3D planes to superpixels, followed by a weighted median filter that broadens the influence of accurately determined pixels. Finally, we present a textureless-aware segmentation method that leverages edge detection and line detection to accurately identify large textureless regions for further depth completion. Experiments on the ETH3D, Tanks & Temples, and Strecha datasets demonstrate the superior performance and strong generalization capability of our proposed method.
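The per-superpixel plane generation in the refinement step can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: `ransac_plane` and `complete_depth` are hypothetical helpers, and the sample size, inlier threshold, and iteration budget are assumed values.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.01, rng=None):
    """Fit a plane z = a*x + b*y + c to an Nx3 point set with RANSAC.

    Illustrative sketch of fitting a 3D plane to the confident depth
    samples inside one superpixel; all parameters are assumptions.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        # Minimal sample: three points determine a candidate plane.
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            a, b, c = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate (near-collinear) sample
        resid = np.abs(points[:, 0] * a + points[:, 1] * b + c
                       - points[:, 2])
        inliers = np.count_nonzero(resid < thresh)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b, c)
    return best_model

def complete_depth(xy, depth, reliable, **kw):
    """Replace unreliable depths with values from the fitted plane."""
    a, b, c = ransac_plane(np.c_[xy, depth][reliable], **kw)
    filled = depth.copy()
    filled[~reliable] = xy[~reliable, 0] * a + xy[~reliable, 1] * b + c
    return filled
```

In the full method, a step like this would run per superpixel and be followed by a weighted median filter to spread the confident depths; both of those stages are omitted here for brevity.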
