Enhancement of 3D Camera Synthetic Training Data with Noise Models (2402.16514v1)
Abstract: The goal of this paper is to assess the impact of noise in 3D camera-captured data by modeling the noise of the imaging process and applying it to synthetic training data. We compiled a dataset of specifically constructed scenes to obtain a noise model. We specifically model lateral noise, which affects the position of captured points in the image plane, and axial noise, which affects the position along the axis perpendicular to the image plane. The estimated models can be used to emulate noise in synthetic training data. The benefit of adding artificial noise is evaluated in an experiment with rendered data for object segmentation. We train a series of neural networks with varying levels of noise in the data and measure their ability to generalize to real data. The results show that using too little or too much noise can hurt the networks' performance, indicating that obtaining a noise model from real scanners is beneficial for synthetic data generation.
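The two noise components described above can be emulated on a synthetic point cloud roughly as follows. This is a minimal illustrative sketch, not the paper's fitted model: it assumes Gaussian noise, a quadratic depth dependence for the axial term (a common empirical model for structured-light depth cameras), and placeholder coefficients and focal lengths.

```python
import numpy as np

def add_depth_noise(points, fx=580.0, fy=580.0, rng=None):
    """Perturb a synthetic point cloud of shape (N, 3) with lateral and
    axial noise.

    Axial noise is applied along z with a standard deviation that grows
    quadratically with depth; lateral noise is specified in pixels in the
    image plane and converted to metric units at each point's depth via
    the focal lengths. All coefficients are illustrative placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points, dtype=float).copy()
    z = pts[:, 2]

    # Axial noise: sigma_z(z) = a + b * z^2 (placeholder coefficients in metres).
    sigma_z = 0.0012 + 0.0019 * z**2
    pts[:, 2] += rng.normal(0.0, sigma_z)

    # Lateral noise: ~0.8 px standard deviation in the image plane,
    # scaled to metres at depth z by dividing by the focal length.
    sigma_px = 0.8
    pts[:, 0] += rng.normal(0.0, sigma_px * z / fx)
    pts[:, 1] += rng.normal(0.0, sigma_px * z / fy)
    return pts
```

In a training pipeline, such a function would be applied to each rendered point cloud before it is fed to the segmentation network, with the coefficients swept over a range to vary the noise level as in the paper's experiment.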