Mobile AR Depth Estimation: Challenges & Prospects -- Extended Version (2310.14437v1)
Abstract: Metric depth estimation plays an important role in mobile augmented reality (AR). With accurate metric depth, we can achieve more realistic user interactions such as object placement and occlusion detection. While specialized hardware like LiDAR shows promise, its restricted availability (only on select high-end mobile devices) and performance limitations, such as limited range and sensitivity to the environment, make it less than ideal. Monocular depth estimation, by contrast, relies solely on mobile cameras, which are ubiquitous, making it a promising alternative for mobile AR. In this paper, we investigate the challenges and opportunities of achieving accurate metric depth estimation in mobile AR. We tested four different state-of-the-art monocular depth estimation models on a newly introduced dataset (ARKitScenes) and identified three types of challenges: hardware-, data-, and model-related. Furthermore, our research provides promising future directions to explore and solve those challenges: (i) using more hardware-related information from the mobile device's camera and other available sensors, (ii) capturing high-quality data that reflects real-world AR scenarios, and (iii) designing model architectures that utilize this new information.
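The occlusion-detection use case mentioned above reduces to a per-pixel depth comparison: a virtual fragment should be hidden wherever the real scene is closer to the camera than the rendered virtual object. A minimal sketch of that test follows; the function name, array shapes, and values are illustrative, not from the paper.

```python
import numpy as np

def occlusion_mask(camera_depth, virtual_depth):
    """Per-pixel occlusion test for AR compositing.

    camera_depth:  H x W array of estimated metric depth (meters)
                   for the real scene, e.g. from a monocular model.
    virtual_depth: H x W array of the virtual object's rendered
                   depth (meters) at the same pixels.
    Returns a boolean mask that is True where the real scene
    occludes the virtual object.
    """
    return camera_depth < virtual_depth

# Toy 1 x 4 example: real surface at 1.0 m on the left half and
# 3.0 m on the right; virtual object rendered at a uniform 2.0 m.
camera = np.array([[1.0, 1.0, 3.0, 3.0]])
virtual = np.full((1, 4), 2.0)
mask = occlusion_mask(camera, virtual)
# Left half occludes the virtual object; right half does not.
```

Because the comparison is in metric units, any systematic scale error in the estimated depth directly shifts which pixels pass this test, which is why metric (rather than relative) depth matters for AR.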
- Apple. 2017. https://developer.apple.com/augmented-reality/.
- ARKitScenes - A Diverse Real-World Dataset for 3D Indoor Scene Understanding Using Mobile RGB-D Data. In NeurIPS Datasets and Benchmarks Track.
- LocalBins: Improving Depth Estimation By Learning Local Distributions. In ECCV.
- ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth. arXiv:2302.12288 (2023). https://doi.org/10.48550/ARXIV.2302.12288
- MiDaS v3.1–A Model Zoo for Robust Monocular Relative Depth Estimation. arXiv:2307.14460 (2023).
- Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild. In CVPR.
- DIML/CVL RGB-D Dataset: 2M RGB-D Images of Natural Indoor and Outdoor Scenes. arXiv:2110.11590 (2021).
- ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In CVPR.
- CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth. In CVPR.
- AdaBins: Depth Estimation Using Adaptive Bins. In CVPR.
- Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance. arXiv:2202.13055 (2022).
- Toward Scalable and Controllable AR Experimentation. In ImmerCom.
- Vision meets Robotics: The KITTI Dataset. IJRR (2013).
- Towards Zero-Shot Scale-Aware Monocular Depth Estimation. In ICCV.
- LiDAR Depth Completion Using Color-Embedded Information via Knowledge Distillation. IEEE Transactions on Intelligent Transportation Systems (2022).
- Intel. 2023. https://www.intelrealsense.com/wp-content/uploads/2023/07/Intel-RealSense-D400-Series-Datasheet-July-2023.pdf.
- Focus on Defocus: Bridging the Synthetic to Real Domain Gap for Depth Estimation. In CVPR.
- Indoor Segmentation and Support Inference from RGBD Images. In ECCV.
- Vision Transformers for Dense Prediction. In ICCV. https://doi.org/10.1109/ICCV48922.2021.01196
- Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer. TPAMI (2020).
- SimpleRecon: 3D Reconstruction Without 3D Convolutions. In ECCV.
- A Benchmark for the Evaluation of RGB-D SLAM Systems. In IROS.
- PhoneDepth: A Dataset for Monocular Depth Estimation on Mobile Devices. In CVPRW.
- Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision. In ICCV.
- Toward Practical Monocular Indoor Depth Estimation. In CVPR.
- Metric3D: Towards Zero-shot Metric 3D Prediction from a Single Image. In ICCV.
- MobiDepth: Real-Time Depth Estimation Using on-Device Dual Cameras. In MobiCom.
- InDepth: Real-Time Depth Inpainting for Mobile Augmented Reality. IMWUT (2022).