3D Foundation Models Overview

Updated 1 December 2025
  • 3D Foundation Models are advanced neural networks pre-trained on massive 3D datasets, enabling robust feature extraction and versatile application across domains.
  • They employ transformer-based architectures, 3D CNNs, and diffusion models to process diverse modalities including point clouds, voxels, meshes, and multi-view images.
  • These models excel in transfer learning, achieving state-of-the-art results in tasks such as CT scan segmentation, robotic policy learning, and urban modeling.

Three-dimensional Foundation Models (3DFMs) refer to large-scale neural networks pretrained—often in a self-supervised fashion—on massive 3D datasets to serve as general-purpose feature extractors, predictors, or generators across diverse 3D tasks. These models are characterized by transferability, representational versatility, and adaptability for downstream applications ranging from perception and geometric reasoning to generation and decision-making in domains such as robotics, radiology, geoscience, urban modeling, and computer graphics.

1. Architectural Principles and Input Modalities

3DFMs are built around modality-specific pretraining. Typical input domains include point clouds, volumetric grids (e.g. voxels), polygon meshes, and multi-view image stacks. Most modern 3DFMs employ transformer-based architectures (e.g. ViT, OPT, or specialized encoder-decoder blocks), but 3D convolutional networks and diffusion models also appear for volumetric data.
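
As an illustrative sketch of how a point cloud is commonly turned into transformer tokens, the snippet below groups points into local patches and pools each patch with a small shared network. This is a generic PointNet-style pattern rather than the tokenizer of any cited model; all names and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class PointPatchTokenizer(nn.Module):
    """Toy PointNet-style tokenizer: group a point cloud into local patches and
    pool each patch into a single token for a downstream transformer.
    All hyperparameters (num_patches, k, dim) are illustrative only."""

    def __init__(self, num_patches=64, k=32, dim=256):
        super().__init__()
        self.num_patches, self.k = num_patches, k
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, pts):                                        # pts: (B, N, 3)
        B, N, _ = pts.shape
        centers = pts[:, torch.randperm(N)[: self.num_patches]]    # (B, P, 3) random patch centres
        knn = torch.cdist(centers, pts).topk(self.k, largest=False).indices  # (B, P, k)
        patches = pts[torch.arange(B)[:, None, None], knn]         # (B, P, k, 3)
        # shared MLP + max-pool: the order of points within a patch does not matter
        return self.mlp(patches - centers.unsqueeze(2)).max(dim=2).values    # (B, P, dim)

tokens = PointPatchTokenizer()(torch.randn(2, 1024, 3))            # (2, 64, 256) patch tokens
```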

Architectural strategies vary with input type: point clouds require permutation-invariant processing, meshes demand sequence-ordering schemes and explicit–implicit embeddings (NeurCF), while volumetric grids require hierarchical lifting of multi-view (often 2D) features into 3D grids (see Egocentric Voxel Lifting for wearable devices; Straub et al., 14 Jun 2024). Compression and distillation methods (e.g. Foundry's SuperToken mechanism) are essential for fitting 3DFMs on edge devices (Letellier et al., 25 Nov 2025).
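
As a concrete illustration of the multi-view lifting pattern mentioned above, the sketch below projects voxel centres into each camera and bilinearly samples 2D features, averaging over the views in which a voxel is visible. It is a minimal, generic version and not the EVL implementation; the pinhole camera convention and the simple averaging scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def lift_features_to_voxels(feat2d, K, T_wc, voxel_xyz):
    """Minimal multi-view feature lifting (not any specific model's code).
    feat2d:    (V, C, H, W) per-view 2D feature maps
    K:         (V, 3, 3) pinhole intrinsics
    T_wc:      (V, 4, 4) world-to-camera extrinsics
    voxel_xyz: (M, 3) voxel centres in world coordinates
    returns    (M, C) view-averaged features per voxel"""
    V, C, H, W = feat2d.shape
    M = voxel_xyz.shape[0]
    homo = torch.cat([voxel_xyz, torch.ones(M, 1)], dim=1)           # homogeneous coords (M, 4)
    acc, cnt = torch.zeros(M, C), torch.zeros(M, 1)
    for v in range(V):
        cam = (T_wc[v] @ homo.T).T[:, :3]                            # voxel centres in camera frame
        z = cam[:, 2:3].clamp(min=1e-6)
        uv = (K[v] @ (cam / z).T).T[:, :2]                           # pixel coordinates
        # normalise to [-1, 1] for grid_sample (x along width, y along height)
        grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], -1) * 2 - 1
        valid = ((cam[:, 2] > 0) & (grid.abs() <= 1).all(dim=1)).float().unsqueeze(1)
        sampled = F.grid_sample(feat2d[v:v + 1], grid.view(1, M, 1, 2),
                                align_corners=True)[0, :, :, 0].T    # (M, C) sampled 2D features
        acc += sampled * valid
        cnt += valid
    return acc / cnt.clamp(min=1)                                    # average over visible views
```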

2. Pretraining Objectives and Algorithms

Most 3DFMs adopt self-supervised or weakly-supervised pretraining on large, unlabeled datasets to maximize generality. Standard objectives include:

  • Contrastive Learning: InfoNCE-style losses align representations of local patches or multi-modal pairs (e.g. image–point–text triplets), as in CT-FM for radiology (Pai et al., 15 Jan 2025) and various 3D–2D dual-encoder frameworks (Thengane et al., 30 Jan 2025); a minimal loss sketch follows this list.
  • Masked Reconstruction: MAE and its variants mask a subset of the input (points, slices) and train the model to reconstruct it, forcing reliance on context and encouraging global understanding (SFM for seismic data; Sheng et al., 2023); a masked-reconstruction sketch appears at the end of this section.
  • Diffusion Modeling: Conditional denoising frameworks (DDPM), implemented by transformers, generate action trajectories or geometric fields from noisy observations (FP3 (Yang et al., 11 Mar 2025), Geo4D).
  • Generative Auto-Regressive Modeling: MeshXL linearizes mesh data into token sequences for next-token prediction, achieving high coverage and geometric fidelity (Chen et al., 31 May 2024).
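
As a concrete form of the contrastive objective referenced above, the following is a minimal symmetric InfoNCE sketch over paired 3D and image embeddings. The temperature, batch construction, and embedding dimensions are illustrative assumptions and not taken from any cited model.

```python
import torch
import torch.nn.functional as F

def info_nce(z_3d, z_img, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired 3D / image embeddings.
    z_3d, z_img: (B, D) outputs of the two encoders; row i of each is a positive pair."""
    z_3d = F.normalize(z_3d, dim=-1)
    z_img = F.normalize(z_img, dim=-1)
    logits = z_3d @ z_img.T / temperature          # (B, B) scaled cosine similarities
    targets = torch.arange(z_3d.shape[0])          # positives sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = info_nce(torch.randn(8, 512), torch.randn(8, 512))
```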

Model scaling studies uniformly show improved metrics with increasing parameter count and dataset size; for example, performance gains in MeshXL from 125M to 1.3B parameters (Chen et al., 31 May 2024), or stability/accuracy improvements with 148,000 CT volumes in CT-FM (Pai et al., 15 Jan 2025).
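
The masked-reconstruction objective can be sketched in a similarly minimal way. The toy model below applies BERT-style mask tokens over patch embeddings (a simplification of MAE-style token dropping) and reconstructs the masked patch centres; the shapes, mask ratio, and reconstruction target are illustrative assumptions rather than any paper's configuration.

```python
import torch
import torch.nn as nn

class ToyMaskedPointModel(nn.Module):
    """Toy masked-reconstruction objective over patch tokens, e.g. tokens produced
    by a patch tokenizer as sketched in Section 1. Illustrative only."""

    def __init__(self, dim=256, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 3)                        # reconstruct patch centres

    def forward(self, tokens, centers):                      # (B, P, D), (B, P, 3)
        mask = torch.rand(tokens.shape[:2]) < self.mask_ratio              # (B, P)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        pred = self.head(self.encoder(x))                    # (B, P, 3)
        # the loss is taken only on masked positions, which forces the model to
        # infer missing geometry from the surrounding context
        return ((pred - centers) ** 2)[mask].mean()

model = ToyMaskedPointModel()
loss = model(torch.randn(2, 64, 256), torch.randn(2, 64, 3))
```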

3. Downstream Tasks and Benchmark Achievements

3DFMs are engineered as generalist models for transfer across heterogeneous tasks and domains:

  • Medical Imaging: Segmentation (Dice: 0.8981), classification, triage, retrieval, and semantic clustering from CT scans (CT-FM) (Pai et al., 15 Jan 2025).
  • Robotics: Policy fine-tuning in manipulation achieves 90–100% success in seen settings and ∼80% in wild scenes, with minimal data (80 demos) (Yang et al., 11 Mar 2025).
  • Geoscience: Seismic facies classification, geobody identification (mIoU up to 0.7980), denoising, inversion, and interpolation (MS-SSIM up to 0.9547) from SFM (Sheng et al., 2023).
  • Urban Modeling: Large-scale building mesh reconstruction, semantic segmentation, wireframe extraction, and city-scale evaluation on BuildingWorld (5M+ buildings, simulated/real ALS) (Huang et al., 9 Nov 2025).
  • Scene Segmentation: Training-free pipelines (PointSeg) that leverage 2D foundation models (SAM) reach 35.6% mAP on ScanNet without any 3D training (He et al., 11 Mar 2024).
  • Egocentric Wearable AI: EFM3D defines snippet- and sequence-level detection mAP (EVL: 0.40–0.75) and mesh accuracy (0.182 m on real scenes) (Straub et al., 14 Jun 2024).
  • Novel View Synthesis and Dense Reconstruction: VGGT-X comes within 1 dB PSNR and 0.05 AUC@30 of COLMAP-based renders while scaling to 1,000+ images (Liu et al., 29 Sep 2025). LoRA3D self-calibration enables up to 88% improvement in multi-view pose and rendering (Lu et al., 10 Dec 2024).

Unified benchmarks like E3D-Bench offer systematic evaluation across five tasks (sparse-view depth, video depth, multi-view pose, 3D reconstruction, novel view synthesis) and diverse domains, fostering reproducible research (Cong et al., 2 Jun 2025).

4. Transferability, Adaptation, and Compression

Generalist 3DFMs can be adapted rapidly to new downstream tasks or domains:

  • Low-Rank Adaptation: LoRA3D fine-tunes only ≈18 MB of adapter parameters per scene, delivering self-calibration in under 5 minutes (Lu et al., 10 Dec 2024); see the adapter sketch after this list.
  • SuperToken Distillation: Foundry compresses full ViT teachers into students operating on far fewer tokens (s ≪ c) while maintaining performance (with s = 16 the student tracks teacher accuracy within 0.5% and loses under 2 mIoU points) and drastically reducing compute (Letellier et al., 25 Nov 2025).
  • Layer/Token Pruning: VGGT-X and others use BFloat16, chunked attention, and selective layer outputs for large VRAM savings (Liu et al., 29 Sep 2025).
  • Lightweight Alignment Schemes: Fine-tuning backbone bias terms unlocks robust extreme-view pose reasoning at marginal parameter cost, improving median rotation error to ≈10–15° (Zhang et al., 27 Nov 2025).
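
The low-rank adaptation referenced in the first bullet follows the generic LoRA recipe: freeze the pretrained weight and learn a small rank-r update alongside it. The sketch below is that generic recipe, not LoRA3D's exact configuration; rank, scaling, and placement are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic low-rank adapter wrapped around a frozen linear layer."""

    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                        # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path plus trainable low-rank update (B @ A), scaled
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)  # only A and B train
```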

Edge deployment is increasingly feasible; runtime, memory, and FLOP reductions are addressed by gating, distillation, and modularized architectures.

5. Multimodal and Cross-Task Integration

3DFMs leverage multimodal pretraining and inference to enable rich spatial reasoning:

  • Vision–Language–Geometry Alignment: Triplet objectives align image, text, and 3D encoders (CLIP, GPT4Point), facilitating open-vocabulary classification, captioning, and ground instruction transfer (Thengane et al., 30 Jan 2025).
  • Hybrid Representations: Scene analogy pipelines fuse sparse CLIP-based graphs with dense 3D feature fields (PartField), enabling zero-shot analogy and trajectory transfer in robotics (Kim et al., 27 Oct 2025).
  • Egocentric Volumetric Lifting: EVL injects 2D semantic features and SLAM geometry into a unified 3D backbone for wearable devices (Straub et al., 14 Jun 2024).

Continuous feature fields and open-vocabulary pipelines represent a significant advance in cross-modal spatial understanding and transfer.
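
As a minimal illustration of how such open-vocabulary pipelines can be queried, the sketch below scores per-point features against a text embedding by cosine similarity; it assumes the point features already live in a CLIP-like joint embedding space, and the thresholding step is purely illustrative.

```python
import torch
import torch.nn.functional as F

def open_vocab_point_scores(point_feats, text_feat):
    """Score every 3D point against a text query by cosine similarity.
    point_feats: (N, D) per-point features assumed to be aligned with a text encoder.
    text_feat:   (D,) embedding of the query text.
    Returns (N,) relevance scores usable for open-vocabulary segmentation."""
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(text_feat, dim=-1)
    return p @ t

scores = open_vocab_point_scores(torch.randn(10000, 512), torch.randn(512))
mask = scores > scores.quantile(0.95)   # e.g. keep the top 5% most relevant points
```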

6. Evaluation, Generalization, and Benchmarks

Standardized metrics across 3DFMs include Chamfer Distance, mIoU, Dice, PSNR, SSIM, LPIPS, absolute translation/rotation error, coverage, and completeness, tracked on open-source benchmarks:

| Task / Metric | Leader(s) | Example Score(s) |
|---|---|---|
| Whole-body CT segmentation (Dice) | CT-FM | 0.8981 (117 labels) |
| Robotics success rate (in-the-wild) | FP3 | ∼80% (unseen scenes/objects) |
| Seismic facies classification (mIoU) | SFM | up to 0.7980 (fine-tuned) |
| Scene segmentation (mAP) | PointSeg | 35.6% (ScanNet, zero-shot) |
| Mesh generation coverage (Chair) | MeshXL | >50% (Objaverse) |
| EFM3D object detection (mAP) | EVL | up to 0.75 |
| NVS PSNR gap to COLMAP | VGGT-X | <1 dB |
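
For reference, Chamfer Distance, one of the geometric metrics listed above, can be computed as follows. Conventions differ across benchmarks (squared vs. unsquared distances, mean vs. sum, L1 vs. L2), so this is one common variant rather than the definition used by any specific paper.

```python
import torch

def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance between two point sets (squared-L2, mean-reduced).
    pred: (N, 3), gt: (M, 3)"""
    d = torch.cdist(pred, gt) ** 2                   # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

cd = chamfer_distance(torch.randn(2048, 3), torch.randn(2048, 3))
```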

Repeated validation across held-out and out-of-domain scenes and across difficulty scales (e.g. MegaUnScene (Zhang et al., 27 Nov 2025), ULTRRA (Cong et al., 2 Jun 2025), Cyber City (Huang et al., 9 Nov 2025)) demonstrates resilience but also exposes limitations in scale, metric accuracy, and open-world generalization.

7. Challenges, Limitations, and Future Directions

Key challenges for 3DFMs include computational and memory cost at scale, recovery of metric (absolute) scale, fusion across 2D, 3D, and text modalities, scalable generative modeling, and open-world generalization.

Future work will likely address efficiency (sparse attention, quantization), metric-scale recovery (absolute depth), multimodal fusion (2D+3D+text), scalable generative modeling (diffusion, auto-regressive hybrids), and robust open-taxonomy learning.


In summary, 3D Foundation Models represent a technical convergence of large-scale self-supervised learning, geometric deep learning, and multimodal integration, yielding versatile, generalist backbones with broad applicability and state-of-the-art results across perception, reasoning, generation, and decision-making tasks in spatial domains. Rigorous benchmarking and ongoing advances in adaptation, compression, and transfer are central to further progress.
