NeBLa: Neural Beer-Lambert for 3D Reconstruction of Oral Structures from Panoramic Radiographs (2304.04027v6)

Published 8 Apr 2023 in eess.IV, cs.CV, and cs.LG

Abstract: Panoramic radiography (panoramic X-ray, PX) is a widely used imaging modality for dental examination. However, PX provides only a flattened 2D image, lacking a 3D view of the oral structure. In this paper, we propose NeBLa (Neural Beer-Lambert) to estimate 3D oral structures from real-world PX. NeBLa tackles full 3D reconstruction for varying subjects (patients), where each reconstruction is based only on a single panoramic image. We create an intermediate representation called simulated PX (SimPX) from 3D cone-beam computed tomography (CBCT) data, based on the Beer-Lambert law of X-ray rendering and the rotational principles of PX imaging. SimPX aims not only to simulate PX faithfully but also to facilitate the reversion back to 3D data. We propose a novel neural model based on ray tracing that exploits both global and local input features to convert SimPX to 3D output. At inference, a real PX image is translated to a SimPX-style image with semantic regularization, and the translated image is processed by the generation module to produce high-quality outputs. Experiments show that NeBLa outperforms the prior state of the art in reconstruction tasks both quantitatively and qualitatively. Unlike prior methods, NeBLa requires neither prior information such as the shape of dental arches nor a matched PX-CBCT dataset for training, which is difficult to obtain in clinical practice. Our code is available at https://github.com/sihwa-park/nebla.
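
The rendering step named in the abstract is the Beer-Lambert law, which relates transmitted X-ray intensity to the line integral of attenuation along a ray: I = I0 * exp(-∫ μ dl). The sketch below is a minimal, hypothetical illustration of how such a ray-cast projection from a CBCT-like attenuation volume might look in NumPy. The function name, nearest-neighbour sampling, and toy geometry are assumptions for illustration only; they do not reproduce the paper's SimPX pipeline, its rotational PX geometry, or any trained module.

```python
import numpy as np

def beer_lambert_projection(mu_volume, ray_origins, ray_dirs,
                            n_samples=256, step=1.0, i0=1.0):
    """Render X-ray-style intensities by integrating attenuation (mu)
    along rays, following the Beer-Lambert law: I = I0 * exp(-sum(mu * dl)).

    mu_volume   : (D, H, W) array of linear attenuation coefficients
    ray_origins : (R, 3) ray start points in voxel coordinates
    ray_dirs    : (R, 3) unit direction vectors
    """
    # Sample points along each ray: shape (R, n_samples, 3)
    ts = np.arange(n_samples) * step
    pts = ray_origins[:, None, :] + ray_dirs[:, None, :] * ts[None, :, None]

    # Nearest-neighbour lookup of mu at each sample (out-of-volume -> 0)
    idx = np.round(pts).astype(int)
    D, H, W = mu_volume.shape
    valid = ((idx >= 0) & (idx < [D, H, W])).all(axis=-1)
    mu = np.zeros(pts.shape[:2])
    z, y, x = idx[valid].T
    mu[valid] = mu_volume[z, y, x]

    # Discretised line integral and Beer-Lambert attenuation
    optical_depth = (mu * step).sum(axis=1)   # approximates integral of mu dl
    return i0 * np.exp(-optical_depth)        # transmitted intensity per ray

# Toy usage: one ray through a small absorbing cube (illustrative values only)
vol = np.zeros((64, 64, 64))
vol[24:40, 24:40, 24:40] = 0.02
origin = np.array([[32.0, 32.0, 0.0]])
direction = np.array([[0.0, 0.0, 1.0]])
print(beer_lambert_projection(vol, origin, direction, n_samples=64))
```

In the paper itself, rays are traced along the rotational trajectory of a panoramic scanner rather than in parallel, and the inverse direction (SimPX back to 3D) is handled by the learned generation module; this sketch only shows the forward attenuation model the abstract refers to.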
