Tomography-to-Geometry Pipeline
- Tomography-to-geometry pipelines are computational sequences that transform tomographic measurements into explicit geometric, topological, or structural representations using integrated statistical models and machine learning.
- They employ diverse methodologies—including classical inversion, Bayesian sampling, and neural networks—to calibrate imaging systems and reconstruct volumetric or mesh-based representations.
- These pipelines are pivotal in industrial CT, medical imaging, robotics, and non-destructive testing, delivering robust geometry extraction even under noisy or sparse data conditions.
A tomography-to-geometry pipeline is a computational sequence that takes tomographic measurements—such as radiographic projections, volumetric images, or 2D cross-sectional scans—and extracts explicit geometric, topological, or structural quantities with scientific or diagnostic utility. This class of pipelines encompasses schemes for reconstructing explicit object boundaries, segmenting organs, estimating topology/geometry parameters, or generating mesh and point-cloud representations from raw tomographic data. Modern pipelines incorporate modular integration of statistical inference, differentiable physics, explicit geometric modeling, and machine learning, enabling robust geometry extraction even in the presence of uncertain system parameters or limited/binary measurement regimes.
1. Mathematical Models and Problem Formulations
Pipelines are grounded in formal forward models describing the image-formation process, noise, and measurement geometry. The canonical forward projector for transmission CT is

$$y = A(\gamma)\,x + \varepsilon,$$

where $y$ is the measured sinogram, $x$ the unknown vectorized attenuation field (or binary occupancy), $A(\gamma)$ the system matrix for geometry $\gamma$, and $\varepsilon$ additive noise, typically Gaussian or Poissonian. The vector $\gamma$ encodes all geometric parameters—positions, angles, and poses of the source(s), object(s), and detector(s).
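This forward model can be made concrete with a toy nearest-bin parallel-beam projector; everything here (grid size, binning scheme, noise level) is illustrative rather than any cited paper's implementation:

```python
import numpy as np

def system_matrix(n, angles, n_det=None):
    """Dense toy system matrix A(gamma) for parallel-beam CT: each column
    assigns one pixel to one detector bin per view (nearest-bin weights)."""
    n_det = n_det or n
    ys, xs = np.mgrid[0:n, 0:n]
    cx = xs.ravel() - (n - 1) / 2          # pixel centers, object-centered frame
    cy = ys.ravel() - (n - 1) / 2
    blocks = []
    for th in angles:
        # signed distance of each pixel center along the rotated detector axis
        t = cx * np.cos(th) + cy * np.sin(th)
        bins = np.clip(np.round(t + (n_det - 1) / 2).astype(int), 0, n_det - 1)
        A_th = np.zeros((n_det, n * n))
        A_th[bins, np.arange(n * n)] = 1.0
        blocks.append(A_th)
    return np.vstack(blocks)

A = system_matrix(8, [0.0, np.pi / 2])     # two views of an 8x8 grid
x = np.zeros((8, 8)); x[3:5, 3:5] = 1.0    # square phantom
rng = np.random.default_rng(0)
y = A @ x.ravel() + rng.normal(scale=0.01, size=A.shape[0])  # noisy sinogram
```

Real pipelines replace this dense matrix with sparse or matrix-free projectors, but the algebraic structure y = A(γ)x + ε is the same.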
In silhouette tomography, the transform is highly quantized: $y = \mathbb{1}[A(\gamma)\,x > 0]$, mapping occupancy to binary ray-hit patterns, with the indicator marking exactly the set of rays that pass through any occupied region. This destroys most amplitude information and introduces extreme ill-posedness (Bell et al., 11 Feb 2024).
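The information loss is easy to see on a tiny binary grid with axis-aligned rays (an illustrative simplification of the ray geometry): the largest occupancy consistent with the binary data is only a loose hull around the true object.

```python
import numpy as np

# Two isolated occupied cells on a 6x6 grid.
x = np.zeros((6, 6)); x[2, 1] = 1.0; x[3, 4] = 1.0

# Axis-aligned "rays": one per column and one per row; the measurement
# records only whether each ray hits occupied material.
col_hit = x.sum(axis=0) > 0
row_hit = x.sum(axis=1) > 0
y = np.concatenate([col_hit, row_hit]).astype(np.uint8)  # binary ray-hit pattern

# Maximal solution: mark every cell whose row ray AND column ray both hit.
hull = np.outer(row_hit, col_hit).astype(np.uint8)
```

Here the hull contains 4 cells although the object occupies only 2: the binary data cannot distinguish the true configuration from its "shadow" completions, which is exactly the ambiguity a learned inversion must resolve.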
Point-source or particle-based models, as in “Geometric Invariants for Sparse Unknown View Tomography” (Zehni et al., 2018), abstract the object as a finite sum of Dirac masses, reducing the inverse problem to inferring the spatial configuration of discrete points from unknown-view projections via rotation-invariant feature extraction.
Bayesian models extend these formalisms by imposing probabilistic priors on the image, the geometry, and hyperparameters (e.g., the noise precision and the image-prior strength), with joint posteriors sampled via Metropolis-within-Gibbs or other MCMC schemes (Pedersen et al., 2022).
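The block-wise sampling pattern can be sketched on a toy target; a 2D Gaussian stands in for the actual joint posterior over image, geometry, and hyperparameters, and random-walk Metropolis stands in for the within-block updates:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(a, b):
    # Toy log-posterior: independent Gaussians; block "a" plays the role of
    # an image coefficient, block "b" a geometry parameter centered at 2.
    return -0.5 * (a ** 2 + (b - 2.0) ** 2)

a, b = 0.0, 0.0
samples = []
for _ in range(5000):
    for block in ("a", "b"):                 # Metropolis-within-Gibbs sweep
        if block == "a":
            prop_a, prop_b = a + rng.normal(0.0, 1.0), b
        else:
            prop_a, prop_b = a, b + rng.normal(0.0, 1.0)
        if np.log(rng.uniform()) < log_post(prop_a, prop_b) - log_post(a, b):
            a, b = prop_a, prop_b            # accept the block proposal
    samples.append((a, b))

post = np.array(samples[1000:])              # discard burn-in
ci = np.percentile(post[:, 1], [2.5, 97.5])  # 95% credible interval for "b"
```

The credible interval for the geometry-like block is exactly the kind of per-parameter uncertainty summary the Bayesian pipelines report.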
2. Geometry Parameter Estimation and System Calibration
Correct geometric parameterization is critical; errors induce nontrivial artifacts such as double-edges and severe misalignments. Classical pipelines in industrial CT employ explicit geometric calibration objects (“phantoms”) with known high-contrast, asymmetric patterns to avoid ambiguities (Senchukova et al., 2022). Forward and backprojected images under hypothesized geometry vectors are compared to ground-truth reference images via correlation metrics; global optimization (e.g., differential evolution) searches for the geometry maximizing image similarity.
The process in (Senchukova et al., 2022) includes:
- Defining a parameter vector (e.g., initial angle, detector radius, source/detector shifts and tilt).
- Acquiring phantom projections (full or sparse angles).
- Maximizing the normalized cross-correlation between filtered backprojections under candidate geometries and the reference image.
- Using evolutionary, nonlocal optimizers for robustness, even under sparsity.
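The calibration loop above can be sketched on a one-parameter toy problem: a single view-angle offset is recovered by maximizing normalized cross-correlation of projections. A coarse grid search stands in for the differential-evolution step, and the projector is a simplistic nearest-bin model, not the paper's forward operator.

```python
import numpy as np

def project(x, angle):
    """1D parallel projection of a 2D image at a given angle (nearest-bin)."""
    n = x.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    t = (xs - (n - 1) / 2) * np.cos(angle) + (ys - (n - 1) / 2) * np.sin(angle)
    bins = np.clip(np.round(t + (n - 1) / 2).astype(int), 0, n - 1)
    return np.bincount(bins.ravel(), weights=x.ravel(), minlength=n)

def ncc(u, v):
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
img = (rng.random((32, 32)) > 0.7).astype(float)  # asymmetric high-contrast phantom
true_angle = 0.30
reference = project(img, true_angle)

candidates = np.linspace(0.0, 1.0, 201)
scores = [ncc(project(img, a), reference) for a in candidates]
est = candidates[int(np.argmax(scores))]          # geometry maximizing similarity
```

The asymmetric, high-contrast phantom is what makes the similarity landscape unambiguous; with a symmetric object, multiple angles would score identically.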
Alternative approaches such as the alternating minimization paradigm in (Huynh et al., 2021) jointly solve for the reconstruction and the geometry parameters via a sequence of regularized image inversions and bounded nonlinear geometry refinements (e.g., per-view quasi-Newton or implicit filtering methods), supporting parallelization and acceleration with Anderson acceleration or crossed secant schemes.
In the Bayesian framework (Pedersen et al., 2022), parameter uncertainty is quantified explicitly via posterior sampling, yielding credible intervals for geometry and pointwise statistics for the induced geometric uncertainty in the reconstructed volume.
3. Image Reconstruction and Geometry Extraction
After geometric parameters are estimated or marginalized, classical and modern pipelines reconstruct the underlying volumetric or geometric field. Reconstruction methods depend on data, prior, and problem regime:
- Analytical: Filtered Backprojection (FBP) is employed post-calibration for dense and well-posed cases (Senchukova et al., 2022).
- Bayesian/regularized: MAP estimation with tailored priors, e.g., Tikhonov or edge-preserving Cauchy differences (Pedersen et al., 2022, Senchukova et al., 2022), solved via gradient or quasi-Newton methods.
- Alternating minimization between the image and the acquisition geometry when the geometry is unknown (Huynh et al., 2021).
- Neural fields: Hierarchical volumetric representations with local neural feature grids and global MLP decoders optimize an explicit Beer–Lambert forward model (as in NeAT (Rückert et al., 2022)), producing adaptive high-fidelity 3D density fields under sparse and limited-angle conditions.
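For the regularized/MAP route, a minimal sketch with a Tikhonov prior illustrates the structure: minimize ||Ax − y||² + α||x||², whose closed form is x = (AᵀA + αI)⁻¹Aᵀy. A random matrix stands in for the calibrated projector, and the prior is the simplest choice rather than the edge-preserving ones used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 80, 40
A = rng.normal(size=(m, n))                  # stand-in for calibrated A(gamma)
x_true = rng.normal(size=n)
y = A @ x_true + rng.normal(scale=0.1, size=m)

alpha = 1e-2                                 # Tikhonov regularization weight
x_map = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
rel_err = np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true)
```

Edge-preserving priors such as Cauchy differences replace the α||x||² term with a non-quadratic penalty, which is why those pipelines resort to gradient or quasi-Newton solvers instead of a single linear solve.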
In silhouette tomography, analytic maximal solutions generate loose geometric hulls, but deep U-Nets trained on backprojected silhouette data robustly recover 3D occupancy maps, effectively learning the mapping from highly quantized input to geometric structure (Bell et al., 11 Feb 2024).
For point-source models, geometric invariants (mean and autocorrelation of Fourier-transformed projections) are used to reconstruct unlabeled radial and pairwise distance histograms, which are then solved as unassigned distance geometry problems (uDGP) via constrained nonconvex optimization (Zehni et al., 2018).
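The rotation invariance exploited by such point-source methods can be checked directly: the multiset of pairwise distances between point sources is unchanged by a global rotation, so it can in principle be estimated without knowing the views.

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.normal(size=(5, 2))                # five point sources in the plane

def pairwise_dists(p):
    """Sorted multiset of pairwise distances (rotation- and label-invariant)."""
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(p), k=1)])

th = 0.7
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])    # global rotation
rotated = pts @ R.T
```

Recovering point positions from this unlabeled distance multiset is precisely the unassigned distance geometry problem (uDGP) that the cited pipeline solves by nonconvex optimization.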
4. Machine Learning and Neural Rendering Integration
Modern pipelines increasingly leverage deep neural networks for both tomographic inversion and geometry extraction:
- Learning-based internal geometry extraction: Deep U-Nets or volumetric MLPs are trained to map raw or backprojected tomograms to explicit segmentations, occupancy, or geometric quantities (Bell et al., 11 Feb 2024, Engelmann et al., 2023).
- Hierarchical neural adaptive tomography (NeAT, (Rückert et al., 2022)) combines explicit octree volumetric feature grids, global MLP decoding, and adaptive subdivision to yield efficient, accurate surface meshes. Optimization targets reprojection error with TV and boundary consistency regularizers, with rapid GPU-based convergence on challenging sparse-view CT.
- Visual error tomography (VET, (Franke et al., 2023)) propagates per-pixel neural rendering errors through a learned CT inversion (NeAT module) to reconstruct 3D error volumes, spawning new point-cloud elements in high-error regions and iteratively cleaning them via differentiable renderers.
Neural implicit fields, with positional encodings and latent codes (shape, appearance) as in TomoGRAF (Xu et al., 12 Nov 2024), enable joint modeling of CT geometry, X-ray physics, and density fields, outperforming NeRF-style or GAN-based baselines in ultra-sparse view settings.
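The positional-encoding step such implicit fields rely on can be sketched as follows (frequency count and dimensions are illustrative): coordinates are lifted to sinusoids of geometrically increasing frequency so a small MLP can represent high-frequency density variation.

```python
import numpy as np

def positional_encoding(p, n_freqs=4):
    """Lift (..., d) coordinates to (..., 2 * n_freqs * d) sinusoidal features."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi   # pi, 2pi, 4pi, 8pi, ...
    angles = p[..., None] * freqs               # shape (..., d, n_freqs)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*p.shape[:-1], -1)

enc = positional_encoding(np.array([[0.1, 0.2, 0.3]]))  # one 3D point -> 24 features
```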
Particle- and simulator-based pipelines (TopoGaussian, (Xiong et al., 16 Mar 2025)) couple Gaussian splatting for multi-view initialization with differentiable particle-based simulators, leveraging adjoint backpropagation through rigid, soft, and actuated simulations for topology optimization; alternative representations (per-particle indicators, DeepSDF, quadratic primitives) enable flexibility in encoding internal structure.
5. Quantitative Validation, Uncertainty, and Robustness
Tomography-to-geometry pipelines are validated across well-defined quantitative metrics:
- Segmentation accuracy (Dice, AUC, MAE) as in choroidal OCT segmentation (Choroidalyzer, (Engelmann et al., 2023)).
- Geometric parameter error and image error under noisy/sparse regimes, supporting uncertainty quantification from posterior MCMC samples (Pedersen et al., 2022).
- Completion and rendering quality (LPIPS, SSIM, PSNR) for neural rendering (Franke et al., 2023).
- Robustness to initial misalignment, measurement noise, and measurement sparsity, with Bayesian and learning-based pipelines consistently outperforming classical methods.
- Particle-based approaches reduce volumetric generation time on average relative to mesh-voxel pipelines and improve center-of-mass accuracy in physical tasks (Xiong et al., 16 Mar 2025).
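Two of these metrics are simple enough to state concretely; a minimal sketch of Dice overlap for segmentations and PSNR for reconstructed images:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A|+|B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1   # predicted 4x4 mask
gt = np.zeros((8, 8)); gt[3:7, 3:7] = 1       # ground-truth 4x4 mask, shifted
d = dice(pred, gt)                            # 2*9/(16+16) = 0.5625
p = psnr(gt, pred)                            # 10*log10(64/14) dB
```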
Uncertainty quantification is native to Bayesian/MCMC frameworks, with credible intervals for both geometry and image. Ensemble statistics over posterior samples propagate geometry uncertainty into the reconstructed domain (Pedersen et al., 2022).
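The ensemble-statistics step can be sketched directly: pointwise means and percentile bands over posterior reconstruction samples give the credible intervals reported by these pipelines. Synthetic Gaussian draws stand in for actual MCMC samples here.

```python
import numpy as np

rng = np.random.default_rng(5)
true_img = np.linspace(0.0, 1.0, 50)                       # 1D stand-in "image"
samples = true_img + rng.normal(0, 0.05, size=(400, 50))   # stand-in posterior draws

post_mean = samples.mean(axis=0)                           # pointwise posterior mean
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)       # pointwise 95% band
coverage = np.mean((true_img >= lo) & (true_img <= hi))    # fraction of pixels covered
```

With well-calibrated samples the empirical coverage of the 95% band sits near 0.95; large deviations signal a misspecified noise or geometry model.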
6. Broader Classes and Applications
Tomography-to-geometry pipelines serve a wide array of scientific and engineering domains:
- Non-destructive testing (industrial sawmill CT, (Senchukova et al., 2022))
- Low-dose/portable medical imaging (Pedersen et al., 2022, Huynh et al., 2021, Xu et al., 12 Nov 2024)
- Ophthalmic biomarker extraction in OCT (thickness, area, vascular index) (Engelmann et al., 2023)
- Geometric design from sparse, ambiguous, or binary data (silhouette, unknown-view tomography, (Bell et al., 11 Feb 2024, Zehni et al., 2018))
- 3D inference for robotics, manufacturing, and soft mechanism design (TopoGaussian, (Xiong et al., 16 Mar 2025))
- Novel view synthesis and neural rendering geometry completion (Franke et al., 2023)
A common trait is the movement toward robust, end-to-end differentiable systems capable of handling geometric uncertainty, nonideal data, and domain-specific physics, integrating the latest advances in optimization, machine learning, and probabilistic inference.
7. Summary Table: Representations and Methods
| Paper / Pipeline | Geometry Model | Inference Approach |
|---|---|---|
| (Pedersen et al., 2022) | Parametric; Bayesian | Metropolis-within-Gibbs MCMC |
| (Senchukova et al., 2022) | 5D geometric parameters | Cross-correlation, DE, Bayesian |
| (Huynh et al., 2021) | Per-view angle/distances | Alternating minimization, AA |
| (Rückert et al., 2022) | Neural octree, density | Volumetric neural field, Adam |
| (Xu et al., 12 Nov 2024) | MLP implicit, X-ray sim | GAN+LPIPS, TV-loss, AdamW |
| (Xiong et al., 16 Mar 2025) | Particle/implicit | Differentiable physics/optim. |
| (Zehni et al., 2018) | Point source, invariants | Bessel invariants, uDGP, opt. |
| (Engelmann et al., 2023) | Segmented tissue | Deep UNet, pixelwise BCE |
| (Franke et al., 2023) | Point cloud, neural | VET module, iterative cleaning |
| (Bell et al., 11 Feb 2024) | Binary occupancy | Analytic + supervised U-Net |
This diversity of mathematical formalisms and algorithmic approaches underlines the versatility and maturity of tomography-to-geometry pipelines in computational imaging research.