
MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer (2208.03543v1)

Published 6 Aug 2022 in cs.CV

Abstract: Self-supervised monocular depth estimation is an attractive solution that does not require hard-to-source depth labels for training. Convolutional neural networks (CNNs) have recently achieved great success in this task. However, their limited receptive field constrains existing network architectures to reason only locally, dampening the effectiveness of the self-supervised paradigm. In light of the recent successes achieved by Vision Transformers (ViTs), we propose MonoViT, a brand-new framework combining the global reasoning enabled by ViT models with the flexibility of self-supervised monocular depth estimation. By combining plain convolutions with Transformer blocks, our model can reason both locally and globally, yielding depth predictions with a higher level of detail and accuracy and allowing MonoViT to achieve state-of-the-art performance on the established KITTI dataset. Moreover, MonoViT demonstrates superior generalization capabilities on other datasets such as Make3D and DrivingStereo.
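The abstract's core architectural idea is pairing a local operator (convolution) with a global one (self-attention) inside each block. The paper's actual encoder is more elaborate, but the principle can be sketched minimally in NumPy; `hybrid_block`, its identity Q/K/V projections, and the simple additive fusion are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def local_conv3x3(x, w):
    """Depthwise 3x3 convolution (local reasoning), zero-padded.
    x: (H, W, C) feature map; w: (3, 3, C) per-channel kernel."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W, :] * w[i, j, :]
    return out

def global_self_attention(x):
    """Single-head self-attention over all H*W spatial tokens (global
    reasoning). Identity Q/K/V projections keep the sketch small."""
    H, W, C = x.shape
    tokens = x.reshape(H * W, C)                  # (N, C) tokens
    scores = tokens @ tokens.T / np.sqrt(C)       # (N, N) similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return (attn @ tokens).reshape(H, W, C)

def hybrid_block(x, w):
    """Hypothetical conv + Transformer block: every output location sees
    both its 3x3 neighborhood and the entire feature map."""
    return local_conv3x3(x, w) + global_self_attention(x)
```

Because the attention branch attends over all spatial positions, the receptive field of even a single such block is the whole image, which is exactly the limitation of pure-CNN encoders the abstract targets.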

Authors (9)
  1. Chaoqiang Zhao (17 papers)
  2. Youmin Zhang (26 papers)
  3. Matteo Poggi (71 papers)
  4. Fabio Tosi (43 papers)
  5. Xianda Guo (23 papers)
  6. Zheng Zhu (200 papers)
  7. Guan Huang (75 papers)
  8. Yang Tang (77 papers)
  9. Stefano Mattoccia (51 papers)
Citations (147)
