Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance (2202.13055v1)

Published 26 Feb 2022 in cs.CV

Abstract: We propose a learning-based depth from focus/defocus (DFF) method that takes a focal stack as input to estimate scene depth. Defocus blur is a useful cue for depth estimation. However, the size of the blur depends not only on scene depth but also on camera settings such as focus distance, focal length, and f-number. Current learning-based methods without any defocus model cannot estimate a correct depth map if camera settings differ between training and test times. Our method takes a plane sweep volume as input, which encodes the constraint between scene depth, defocus images, and camera settings; this intermediate representation enables depth estimation with different camera settings at training and test times. This camera-setting invariance can enhance the applicability of learning-based DFF methods. The experimental results also indicate that our method is robust against a synthetic-to-real domain gap and exhibits state-of-the-art performance.
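The defocus cue described in the abstract can be illustrated with the standard thin-lens circle-of-confusion model. The sketch below is only an illustration of that general relationship, not the paper's actual model or code: the function name, the plane-sweep depth hypotheses, and all numeric values are assumptions chosen for the example. It shows why the same scene depth yields different blur sizes under different camera settings, which is the motivation for a camera-setting-aware intermediate representation.

```python
import numpy as np

def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Circle-of-confusion diameter under the standard thin-lens model.

    depth       : scene depth(s) of the point being imaged
    focus_dist  : distance at which the lens is focused
    focal_len   : focal length of the lens
    f_number    : aperture f-number
    All distances share the same unit (e.g. metres).
    """
    aperture = focal_len / f_number  # aperture diameter
    # Blur grows with the separation between the scene point and the focal plane.
    return aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))

# Hypothetical example: candidate depths of a plane sweep and three focus
# distances of a focal stack. The blur for a fixed scene depth changes with
# the focus distance, so a network trained without any defocus model would
# not transfer across camera settings.
depths = np.array([0.5, 1.0, 2.0, 4.0])   # candidate scene depths (m)
for fd in (0.7, 1.5, 3.0):                # focus distances of the stack (m)
    print(fd, coc_diameter(depths, focus_dist=fd, focal_len=0.05, f_number=2.0))
```

In the paper's formulation, this kind of depth-dependent defocus constraint is what the plane sweep volume captures, so the network sees a representation that already accounts for the camera settings rather than raw focal-stack images.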

Authors (4)
  1. Yuki Fujimura (6 papers)
  2. Masaaki Iiyama (6 papers)
  3. Takuya Funatomi (3 papers)
  4. Yasuhiro Mukaigawa (4 papers)
Citations (5)
