
MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray (2202.01020v3)

Published 2 Feb 2022 in eess.IV and cs.CV

Abstract: Computed tomography (CT) is an effective medical imaging modality, widely used in the field of clinical medicine for the diagnosis of various pathologies. Advances in Multidetector CT imaging technology have enabled additional functionalities, including generation of thin slice multiplanar cross-sectional body imaging and 3D reconstructions. However, this involves patients being exposed to a considerable dose of ionising radiation. Excessive ionising radiation can lead to deterministic and harmful effects on the body. This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few or even a single-view X-ray. This is based on a novel architecture that builds from neural radiance fields, which learns a continuous representation of CT scans by disentangling the shape and volumetric depth of surface and internal anatomical structures from 2D images. Our model is trained on chest and knee datasets, and we demonstrate qualitative and quantitative high-fidelity renderings and compare our approach to other recent radiance field-based methods. Our code and link to our datasets are available at https://github.com/abrilcf/mednerf

Authors (6)
  1. Abril Corona-Figueroa (3 papers)
  2. Jonathan Frawley (4 papers)
  3. Sam Bond-Taylor (10 papers)
  4. Sarath Bethapudi (2 papers)
  5. Hubert P. H. Shum (67 papers)
  6. Chris G. Willcocks (19 papers)
Citations (59)

Summary

  • The paper introduces MedNeRF, a novel model that reconstructs high-fidelity 3D CT images from sparse X-ray data, significantly reducing ionizing radiation exposure.
  • It leverages a generative adversarial framework with self-supervised discriminator architectures and perceptual loss metrics to disentangle volumetric depth from 2D projections.
  • Experiments using digitally reconstructed radiographs show superior performance with high PSNR, SSIM, and improved FID/KID metrics over traditional models.

MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray

The paper introduces MedNeRF, an advanced deep learning model designed to reconstruct CT projections from a limited number of X-ray views, potentially reducing patient exposure to harmful ionizing radiation typically required for conventional CT scans. Building upon the foundational concept of neural radiance fields (NeRF), the authors propose a novel architecture capable of synthesizing high-fidelity 3D-aware images by disentangling volumetric depth and anatomical shape from two-dimensional X-ray images. This significant advance in medical imaging leverages continuous representations to map the complete internal anatomy of an organ.
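At the core of any NeRF-style model is the volume-rendering quadrature that composites density and radiance samples along a camera ray into a pixel value. A minimal numpy sketch of that step (the function name and toy values are illustrative, not the paper's code):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite samples along one ray using the NeRF quadrature:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i is the transmittance accumulated before sample i.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N,) per-sample radiance (grayscale, as in X-ray-like data)
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)                       # segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # T_i
    weights = trans * alphas
    return np.sum(weights * colors), np.sum(weights)                 # value, opacity

# A ray through empty space in front of a dense slab:
densities = np.array([0.0, 0.0, 5.0, 5.0])
colors = np.array([0.0, 0.0, 1.0, 1.0])
deltas = np.full(4, 0.5)
value, acc = render_ray(densities, colors, deltas)
```

Because the accumulated weights sum over the whole ray, opaque anatomy behind other structures still contributes to the rendered pixel, which is what lets the model disentangle depth from a single projection.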

MedNeRF adopts a generative adversarial framework based on Generative Radiance Fields (GRAF), addressing challenges specific to medical imaging. Traditional NeRF applications assume static, controlled scenes with many input views, and often require masks to separate objects from backgrounds. MedNeRF instead adapts these principles to the complex, overlapping anatomical structures found in clinical images: it uses self-supervised learning to strengthen the discriminator's guidance of the generator, improving the mapping from sparse input views to 3D representations.
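The adversarial objective can be illustrated with the standard non-saturating GAN loss on raw discriminator logits; this is a generic sketch of the training signal, not the paper's exact implementation:

```python
import numpy as np

def softplus(x):
    """Numerically stable log(1 + exp(x))."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def gan_losses(d_real, d_fake):
    """Non-saturating GAN losses from raw discriminator logits.
    d_loss pushes real logits up and fake logits down;
    g_loss pushes fake logits up."""
    d_loss = np.mean(softplus(-d_real)) + np.mean(softplus(d_fake))
    g_loss = np.mean(softplus(-d_fake))
    return d_loss, g_loss

# A discriminator that confidently separates real from fake:
d_loss, g_loss = gan_losses(np.array([10.0]), np.array([-10.0]))
```

When the discriminator wins decisively (as above), its own loss vanishes while the generator's loss grows, which is the gradient signal the radiance-field generator trains against.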

The authors train the model on digitally reconstructed radiographs (DRRs) rather than raw medical CT data. This strategy sidesteps the ethical and practical issues of additional patient radiation exposure and makes it easier to expand the datasets without protracted ethical review. Their experiments use DRRs of knee and chest CT scans and report high peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values, supporting the effectiveness of the approach.
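Of the two reported image-quality metrics, PSNR has a simple closed form; a minimal sketch for images scaled to a known range (SSIM is more involved and omitted here):

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in decibels for images in [0, max_val]."""
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)

# Uniform error of 0.1 on a unit-range image gives MSE = 0.01, i.e. 20 dB:
ref = np.zeros((8, 8))
noisy = ref + 0.1
```

Higher PSNR means lower mean-squared error against the ground-truth DRR, so the metric directly tracks reconstruction accuracy.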

Furthermore, MedNeRF incorporates a self-supervised discriminator architecture together with perceptual loss metrics (e.g., LPIPS) to improve the fidelity of reconstructed internal anatomical structures. Integrating multiple discriminator heads with shared weights, following Data Augmentation optimized for GAN (DAG), makes adversarial training more robust and helps counter issues such as mode collapse.
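The DAG idea of scoring several augmented copies of an image with parallel discriminator heads and combining the per-head losses can be sketched as follows; all names and the toy heads are illustrative (in practice the heads share most of their parameters):

```python
import numpy as np

def dag_discriminator_loss(heads, augmentations, image, weights=None):
    """DAG-style objective sketch: each head scores a differently augmented
    copy of the image, and per-head losses are combined with shared weights
    (uniform by default)."""
    n = len(heads)
    if weights is None:
        weights = [1.0 / n] * n
    return sum(w * head(aug(image))
               for w, head, aug in zip(weights, heads, augmentations))

# Toy example: two "heads" that just average pixel scores, paired with
# identity and horizontal-flip augmentations.
img = np.arange(4.0).reshape(2, 2)
heads = [np.mean, np.mean]
augs = [lambda x: x, lambda x: x[:, ::-1]]
loss = dag_discriminator_loss(heads, augs, img)
```

Averaging losses over augmented views gives the discriminator more varied signal from the same small medical dataset, which is what makes the adversarial training more stable.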

The paper makes several strong claims about MedNeRF's methodological advances. The authors report that their framework outperforms existing models such as GRAF and pixelNeRF in rendering accurate, detailed medical images, as evidenced by superior FID and KID scores, and they emphasize the practical benefit of reduced radiation exposure.
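FID and KID both compare feature distributions of real and generated images; KID in particular has a simple unbiased estimator. A generic sketch over precomputed feature vectors (not tied to the paper's evaluation code):

```python
import numpy as np

def kid(x, y):
    """Unbiased Kernel Inception Distance: squared MMD between two feature
    sets using the polynomial kernel k(a, b) = (a . b / d + 1)^3.
    x: (n, d) and y: (m, d) arrays of (e.g. Inception) features."""
    d = x.shape[1]
    k = lambda a, b: (a @ b.T / d + 1.0) ** 3
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # Exclude diagonal self-similarities for the unbiased within-set terms.
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()

# Matched feature sets score near zero; a shifted set scores much higher.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))
y = rng.normal(size=(50, 8))
z = rng.normal(size=(50, 8)) + 3.0
```

Lower KID (and FID) means the generated renderings are statistically closer to real projections, which is the basis for the comparison against GRAF and pixelNeRF.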

The practical implications of MedNeRF extend to clinical scenarios where reducing radiation dosage is critically important, such as in pediatrics or in routine investigations where cumulative radiation exposure poses a long-term risk. The potential economic effects of integrating such technology include reduced costs associated with radiation shielding and exposure management. Moreover, theoretically, this development paves the way for future research on neural representations in healthcare, particularly in advancing methodologies for sparse-view reconstruction and their applications in diagnostic imaging.

In conclusion, MedNeRF represents a significant step forward in medical imaging technology, integrating sophisticated machine learning techniques with medical radiography to address urgent clinical needs. Future work may explore the integration of this technology in broader clinical practices and its adaptability to other imaging modalities or anatomical regions. The model's ability to synthesize comprehensive 3D structures from minimal input also presents opportunities to refine other forms of sparse data interpretation in medical AI applications.
