Depth Estimation from a Single Optical Encoded Image using a Learned Colored-Coded Aperture (2309.08033v1)

Published 14 Sep 2023 in cs.CV

Abstract: Depth estimation from a single image captured by a conventional camera is a challenging task since depth cues are lost during the acquisition process. State-of-the-art approaches improve the discrimination between different depths by introducing a binary-coded aperture (CA) in the lens aperture that generates different coded blur patterns at different depths. Color-coded apertures (CCA) can also produce color misalignment in the captured image, which can be utilized to estimate disparity. Leveraging advances in deep learning, more recent works have explored the data-driven design of a diffractive optical element (DOE) for encoding depth information through chromatic aberrations. However, compared with binary CA or CCA, DOEs are more expensive to fabricate and require high-precision devices. Different from previous CCA-based approaches that employ a few basic colors, in this work we propose a CCA with a greater number of color filters and richer spectral information to optically encode relevant depth information in a single snapshot. Furthermore, we propose to jointly learn the color-coded aperture (CCA) pattern and a convolutional neural network (CNN) to retrieve depth information by using an end-to-end optimization approach. We demonstrate through different experiments on three different data sets that the designed color encoding has the potential to remove depth ambiguities and provides better depth estimates compared to state-of-the-art approaches. Additionally, we build a low-cost prototype of our CCA using a photographic film and validate the proposed approach in real scenarios.
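The core idea of the abstract, jointly optimizing an optical encoding element together with a reconstruction CNN so that gradients from the depth loss also shape the aperture pattern, can be illustrated with a minimal sketch. This is not the authors' implementation: the aperture is modeled here as a small grid of learnable per-color transmittances, and the depth-dependent optics are crudely approximated by blurring each color channel with a kernel derived from that grid; all layer names and shapes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of end-to-end joint learning of a
# color-coded aperture (CCA) pattern and a depth-estimation CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodedApertureLayer(nn.Module):
    """Differentiable stand-in for the optical encoding of the CCA."""
    def __init__(self, grid=5, channels=3):
        super().__init__()
        # Learnable aperture pattern: one transmittance per cell and color.
        self.logits = nn.Parameter(torch.randn(channels, 1, grid, grid))

    def forward(self, img):
        # Softmax keeps each per-color kernel positive and normalized.
        psf = torch.softmax(self.logits.flatten(2), dim=-1).view_as(self.logits)
        # Blur each color channel with its own kernel (proxy for coded defocus blur).
        return F.conv2d(img, psf, padding=psf.shape[-1] // 2, groups=img.shape[1])

class DepthNet(nn.Module):
    """Small CNN that maps the encoded image to a dense depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

optics, decoder = CodedApertureLayer(), DepthNet()
params = list(optics.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

# One joint optimization step on dummy data (RGB images + ground-truth depth).
rgb = torch.rand(4, 3, 64, 64)
depth_gt = torch.rand(4, 1, 64, 64)
opt.zero_grad()
encoded = optics(rgb)                      # simulate the optically encoded capture
loss = F.l1_loss(decoder(encoded), depth_gt)
loss.backward()                            # gradients also reach the aperture pattern
opt.step()
```

The key design point mirrored here is that the optical element is just another differentiable module, so the same loss that trains the CNN also selects the color filter pattern.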

Citations (1)
