
Lightweight Monocular Depth Estimation via Token-Sharing Transformer (2306.05682v1)

Published 9 Jun 2023 in cs.CV, cs.AI, cs.LG, cs.RO, and eess.IV

Abstract: Depth estimation is an important task in various robotics systems and applications. In mobile robotics systems, monocular depth estimation is desirable since a single RGB camera can be deployed at low cost and in a compact size. Due to this significant and growing demand, many lightweight monocular depth estimation networks have been proposed for mobile robotics systems. While most lightweight monocular depth estimation methods have been developed using convolutional neural networks, the Transformer has recently been adopted for monocular depth estimation as well. However, the Transformer's massive parameter count and large computational cost hinder deployment on embedded devices. In this paper, we present the Token-Sharing Transformer (TST), a Transformer-based architecture for monocular depth estimation optimized especially for embedded devices. The proposed TST utilizes global token sharing, which enables the model to obtain accurate depth predictions with high throughput on embedded devices. Experimental results show that TST outperforms existing lightweight monocular depth estimation methods. On the NYU Depth v2 dataset, TST can deliver depth maps at up to 63.4 FPS on the NVIDIA Jetson Nano and 142.6 FPS on the NVIDIA Jetson TX2, with lower errors than existing methods. Furthermore, TST achieves real-time depth estimation of high-resolution images on the Jetson TX2 with competitive results.
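The abstract does not spell out how global token sharing is implemented, but the general efficiency idea it alludes to can be illustrated: instead of full self-attention among all N patch tokens (O(N²)), each patch token attends to a small shared set of M global tokens (O(N·M), with M ≪ N). The sketch below is a hypothetical NumPy illustration of that pattern, not the paper's actual TST architecture; the function name, token counts, and dimensions are assumptions for demonstration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def shared_token_attention(patch_tokens, global_tokens):
    """Illustrative cross-attention: every patch token attends to a
    small, shared set of global tokens rather than to all other
    patches, reducing cost from O(N^2) to O(N*M) with M << N.
    (Hypothetical sketch; not the paper's actual TST design.)"""
    d = patch_tokens.shape[-1]
    scores = patch_tokens @ global_tokens.T / np.sqrt(d)  # (N, M)
    weights = softmax(scores, axis=-1)                    # rows sum to 1
    return weights @ global_tokens                        # (N, d)

rng = np.random.default_rng(0)
patches = rng.standard_normal((196, 64))  # e.g. 14x14 grid of patch tokens
globals_ = rng.standard_normal((8, 64))   # small shared global token set
out = shared_token_attention(patches, globals_)
print(out.shape)  # (196, 64)
```

With 196 patch tokens and 8 global tokens, the attention matrix has 196×8 entries instead of 196×196, which is the kind of saving that makes high throughput on devices like the Jetson Nano plausible.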

Authors (7)
  1. Dong-Jae Lee (8 papers)
  2. Jae Young Lee (9 papers)
  3. Hyounguk Shon (7 papers)
  4. Eojindl Yi (4 papers)
  5. Yeong-Hun Park (2 papers)
  6. Sung-Sik Cho (2 papers)
  7. Junmo Kim (90 papers)
Citations (3)
