
Total Variation-Based Dense Depth from Multi-Camera Array (1711.07719v1)

Published 21 Nov 2017 in cs.CV

Abstract: Multi-camera arrays are increasingly employed in both consumer and industrial applications, and various passive techniques have been documented for estimating depth from such camera arrays. Current depth estimation methods provide useful estimates of depth in an imaged scene but are often impractical due to significant computational requirements. This paper presents a novel framework that generates a high-quality continuous depth map from multi-camera array/light field cameras. The proposed framework analyzes the local Epipolar Plane Image (EPI) to initialize the depth estimation process. The estimated depth map is then refined using Total Variation (TV) minimization based on Fenchel-Rockafellar duality. Evaluation on a well-known benchmark, whose test dataset includes both photorealistic and non-photorealistic scenes, indicates that the proposed framework compares well in accuracy with the top-ranked depth estimation methods and a baseline algorithm. Notably, the computational cost of achieving an equivalent accuracy is significantly reduced compared to the top algorithms. As a consequence, the proposed framework is suitable for deployment in consumer and industrial applications.
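The paper does not ship reference code, but the TV-refinement stage it describes corresponds to a well-known primal-dual scheme arising from Fenchel-Rockafellar duality. Below is a minimal NumPy sketch of a Chambolle-Pock style solver for the ROF model (TV regularization plus a quadratic data term), which is one standard realization of that duality; the fidelity weight `lam`, the step sizes `tau`/`sigma`, and the iteration count are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_refine(f, lam=10.0, n_iter=200):
    """Primal-dual solver for min_u TV(u) + lam/2 ||u - f||^2,
    used here to smooth a noisy initial depth map f while
    preserving depth discontinuities. All parameters are
    illustrative assumptions, not the paper's settings."""
    L2 = 8.0                       # squared operator norm of grad
    tau = 0.02
    sigma = 1.0 / (L2 * tau)       # ensures tau * sigma * L2 <= 1
    theta = 1.0
    u = f.copy()
    u_bar = u.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        # dual ascent followed by pointwise projection onto the unit ball
        gx, gy = grad(u_bar)
        px += sigma * gx
        py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm
        py /= norm
        # primal descent with the closed-form prox of the data term
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # over-relaxation step
        u_bar = u + theta * (u - u_old)
    return u
```

In a pipeline like the one the abstract outlines, `f` would be the noisy depth initialization obtained from local EPI slope analysis, and `tv_refine(f)` would return an edge-preserving, piecewise-smooth depth map; the hypothetical variable names here are chosen for illustration only.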

Citations (2)