
Self-Supervised Physics-Guided Deep Learning

Updated 15 January 2026
  • Self-supervised physics-guided deep learning is a framework that integrates physical models within deep networks to address inverse problems when labeled data is scarce.
  • It leverages unrolled network architectures that alternate between learned regularization and physics-based consistency, achieving performance comparable to supervised methods in applications like MRI and simulation.
  • The approach employs self-supervised loss designs that partition measurements to enforce physical consistency, improving reliability and generalization in high-stakes scientific and engineering tasks.

A self-supervised physics-guided deep learning framework is a class of machine learning methodologies that integrates known physical models directly into the training and inference processes of deep neural networks, while utilizing only (possibly undersampled or unlabeled) measured data by designing tasks or losses that do not require ground-truth outputs. This paradigm enables learning robust mappings for inverse problems, scientific simulation, or parameter identification when labeled or fully-sampled reference data are unavailable. Both the model architecture and loss functions are structured so that the network’s outputs are constrained or regularized by physics-based priors, operators, or governing equations, while self-supervision is achieved by reprojecting predictions via the physical model and enforcing consistency on held-out measurements or surrogate pretext tasks. The approach spans imaging (e.g., MRI), computational physics, scientific video prediction, material properties, and surrogate modeling.

1. Fundamental Principles and Core Problem Formulations

Self-supervised physics-guided deep learning frameworks formulate the learning or inference process as a (potentially nonlinear or nonconvex) inverse problem, incorporating physical forward models either as explicit algorithmic operators or as loss constraints. A canonical example is accelerated MRI, where the physics is a linear encoding operator $E_\Omega$ (comprising the Fourier transform, sampling mask, and coil sensitivities), modeling the acquisition as

$$y_\Omega = E_\Omega x + n$$

where $x$ is the object to reconstruct, $y_\Omega$ are the measurements at acquired frequency locations $\Omega$, and $n$ is measurement noise. The corresponding inverse problem is often cast as a regularized least-squares minimization,

$$\hat{x} = \arg\min_x \|y_\Omega - E_\Omega x\|_2^2 + R(x)$$

with $R(x)$ a regularizer expressing prior knowledge. In the broader context (materials, fluid flow, mechanics), analogous formulations embed analytic, PDE-based, or surrogate-physics branches; for example, the SPINN model augments a neural estimator of the Nusselt number with an embedded semiempirical physics formula (Pirayeshshirazinezhad, 7 Sep 2025).
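
As an illustration, the forward model and regularized objective above can be sketched in a few lines of NumPy for a single-coil Cartesian acquisition. This is a minimal sketch: the 30% sampling rate, noise level, and the $\ell_1$ stand-in for the regularizer $R$ are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-coil setup: E_Omega is an undersampled 2D Fourier operator.
n = 64
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # object to recover
mask = rng.random((n, n)) < 0.3                                     # sampled locations Omega

def E(img, mask):
    """Forward model: Fourier transform restricted to sampled locations."""
    return mask * np.fft.fft2(img, norm="ortho")

noise = 0.01 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
y = E(x, mask) + mask * noise  # y_Omega = E_Omega x + n

def objective(x_hat, y, mask, lam, R):
    """Regularized least-squares: ||y - E x||_2^2 + lam * R(x)."""
    residual = y - E(x_hat, mask)
    return np.linalg.norm(residual) ** 2 + lam * R(x_hat)

# Example regularizer: an l1 norm, standing in for a learned prior R(x).
val = objective(np.zeros_like(x), y, mask, lam=0.01, R=lambda z: np.abs(z).sum())
```

The true object attains a much smaller data-fidelity value than the zero image, which is what the minimization exploits.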

2. Model Architectures: Unrolled Solvers, Physics Modules, and Self-Supervised Losses

Most frameworks instantiate the solution of the inverse problem with a learnable, "unrolled" network architecture, which mimics a fixed number of steps in an iterative physics-based algorithm. Each block or iteration alternates between:

  • Learned regularization/proximal step: Nonlinear CNN denoisers or graph neural networks that encode learned structural or semantic priors (e.g., ResNet, U-Net, Implicit Neural Representations (INR) as in UnrollINR (Xu et al., 8 Oct 2025)).
  • Physics-based data consistency step: Exact or approximate enforcement of the known forward model (e.g., by solving quadratic subproblems with conjugate gradient, or propagating through differentiable simulation engines (Kandukuri et al., 2020)).
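
The data-consistency step above, which solves the quadratic subproblem $(E_\Theta^H E_\Theta + \mu I)\,x = E_\Theta^H y_\Theta + \mu z$, can be sketched with a matrix-free conjugate-gradient solver. This is a minimal single-coil illustration; the helper name `dc_step`, the mask density, and the value of $\mu$ are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
mask = rng.random((n, n)) < 0.4   # sampled subset Theta
mu = 0.05

def E(img):   return mask * np.fft.fft2(img, norm="ortho")
def EH(ksp):  return np.fft.ifft2(mask * ksp, norm="ortho")

def dc_step(y_theta, z, mu, iters=20):
    """Solve (E^H E + mu I) x = E^H y + mu z by conjugate gradient, matrix-free.

    z is the output of the preceding learned regularization (denoiser) step.
    """
    b = EH(y_theta) + mu * z
    x = np.zeros_like(b)
    r = b - (EH(E(x)) + mu * x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = EH(E(p)) + mu * p
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x_true = rng.standard_normal((n, n))
y = E(x_true)
x_dc = dc_step(y, z=np.zeros((n, n)), mu=mu)
```

Because $E^H E$ is diagonal in the Fourier basis for this single-coil case, the system has only two distinct eigenvalues ($\mu$ and $1+\mu$) and CG converges in a couple of iterations; multi-coil operators require the full iteration budget.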

The network is trained with self-supervised losses that use only the acquired (incomplete) measurements. In MRI, the measurements are randomly partitioned per training sample into two disjoint sets: a "data-consistency" subset $\Theta$ supplied to the DC blocks, and a "loss" or "validation" subset $\Lambda$ held out from the network and reserved exclusively for the loss, such that

$$L(\theta) = \frac{1}{N} \sum_{i} \ell\left(y^i_\Lambda,\ E^i_\Lambda f(y^i_\Theta; E^i_\Theta; \theta)\right)$$

with $\ell$ typically a hybrid $\ell_1/\ell_2$ metric or normalized error (Yaman et al., 2019, Yaman et al., 2019, Yaman et al., 2020).

Table: Example MRI Unrolled Network Elements

Block                        | Description                                                        | References
Regularization step          | ResNet or INR proximal operator                                    | (Xu et al., 8 Oct 2025)
Data-consistency step        | $(E_\Theta^H E_\Theta + \mu I)^{-1}(E_\Theta^H y_\Theta + \mu z)$  | (Yaman et al., 2019)
Physics-guided loss          | Self-supervised k-space or image-domain consistency                | (Yaman et al., 2019)
Multi-task/pretext learning  | Physics-informed microproperty prediction                          | (Fu et al., 2024)

In non-imaging domains, physics modules may include differentiable physics engines (Kandukuri et al., 2020), energy-minimization or Helmholtz solvers (Wang et al., 2024), or analytic surrogates as in SPINN (Pirayeshshirazinezhad, 7 Sep 2025).

3. Self-Supervision Protocols and Physical Consistency Losses

Self-supervised loss design is central to this paradigm. Rather than regressing to ground-truth targets, loss is computed such that the network’s outputs, when passed through the forward physics model, match observed measurements withheld from the main data-consistency path. For MRI and similar linear inverse problems, this typically entails:

  • Selecting $\Theta \subset \Omega$ for the network input and $\Lambda = \Omega \setminus \Theta$ for the loss; empirical evidence favors $\rho = |\Lambda|/|\Omega| \approx 0.3$–$0.4$ with variable-density selection (Yaman et al., 2019).
  • Defining the loss on held-out measurements as

$$\ell(u, v) = \frac{\|u-v\|_2}{\|u\|_2} + \frac{\|u-v\|_1}{\|u\|_1}$$

Non-imaging frameworks generalize this, e.g., by projecting the predicted quantitative maps via embedded Bloch-based MRI signal models and comparing to acquired weighted images (Lune et al., 8 Jan 2026), or by matching surrogate physics models (Nusselt number) and balancing physics-data losses via learned uncertainty weights (Pirayeshshirazinezhad, 7 Sep 2025). For differentiable mechanics (Wang et al., 2024), the loss is the expected physical energy plus constraint penalties.
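
A minimal sketch of this partition-and-compare protocol for the linear case is below; the helper names `partition_mask` and `hybrid_loss` and the fixed $\rho = 0.35$ are illustrative assumptions, not a specific released implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def partition_mask(omega, rho=0.35, rng=rng):
    """Split acquired locations Omega into disjoint Theta (network input)
    and Lambda (held out, loss only), with |Lambda| ~ rho * |Omega|."""
    idx = np.flatnonzero(omega)
    lam_idx = rng.choice(idx, size=int(rho * idx.size), replace=False)
    lam = np.zeros_like(omega, dtype=bool)
    lam.flat[lam_idx] = True
    theta = omega & ~lam
    return theta, lam

def hybrid_loss(u, v):
    """Normalized hybrid l1/l2 loss on held-out k-space:
    ||u-v||_2/||u||_2 + ||u-v||_1/||u||_1 (u must be nonzero)."""
    diff = u - v
    return (np.linalg.norm(diff) / np.linalg.norm(u)
            + np.abs(diff).sum() / np.abs(u).sum())

omega = rng.random((64, 64)) < 0.3          # acquired locations
theta, lam = partition_mask(omega)          # disjoint split covering omega
```

During training, the network only ever sees `theta`-masked data; the loss compares its reprojected output to the measurements on `lam`, so no fully sampled reference is required.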

4. Application Domains and Empirical Results

Self-supervised physics-guided frameworks have been validated across a diverse array of physical-science and engineering tasks:

  • Accelerated MRI: Self-supervised (SSDU, multi-mask SSDU) unrolled networks achieve NMSE $\simeq 0.016$ and SSIM $\simeq 0.934$, essentially closing the gap with fully supervised methods and outperforming CG-SENSE and compressed sensing (Yaman et al., 2019, Yaman et al., 2019, Yaman et al., 2020, gu et al., 2023). Reader studies confirm diagnostic equivalence.
  • fMRI/Non-Cartesian Trajectories: Self-supervised unrolled deep networks extend to highly accelerated ($R = 10$–$20$) SMS or spiral fMRI, reliably preserving temporal SNR and functional mapping (Demirel et al., 2021, gu et al., 2023).
  • Quantitative MRI: Physics-guided self-supervision enables estimation of T1/T2/PD maps from clinical T1w/T2w/FLAIR without quantitative ground truth, yielding per-scanner coefficients of variation $\leq 1.1\%$ and high intra-subject reproducibility (Lune et al., 8 Jan 2026).
  • Materials Property Prediction: DSSL (dual self-supervised learning) combines node masking, contrastive SSL, and physics-guided microproperty pretext tasks, attaining up to $26.89\%$ improvement in MAE for elastic moduli relative to baseline GNN encoders (Fu et al., 2024).
  • Video Physics Parameter ID: Self-supervised differentiable physics engines infer object mass and friction from rendered videos and action sequences, achieving physical parameter errors of $\sim 5$–$10\%$ (Kandukuri et al., 2020).
  • Scientific Simulation: Neural Modes learn nonlinear modal subspaces by minimizing expected mechanical energy, outperforming PCA and autoencoders by an order of magnitude in energy, stress, and internal force error (Wang et al., 2024).
  • Surrogate Modeling: SPINN leverages a physics-informed loss with a learnable data/physics trade-off, achieving Nusselt-number prediction MAPE within $8\%$ of CFD while outperforming kernel and classic PINN baselines (Pirayeshshirazinezhad, 7 Sep 2025).
  • Optics/Holography: GedankenNet, trained only on random synthetic images and physics consistency, reconstructs complex fields consistent with Maxwell’s equations robustly across perturbations, outperforming classical phase retrieval in ECC and SSIM (Huang et al., 2022).

5. Variants, Extensions, and Domain-Specific Implementations

The framework admits multiple variants and domain-specific adaptations:

  • 3D and Dynamic Imaging: Adapted to volumetric and spatiotemporal sequences, with modifications for GPU-memory efficiency (e.g., slab extraction (Yaman et al., 2020); per-frame or per-slab unrolling (Demirel et al., 2021)).
  • Non-Cartesian Sampling: Toeplitz kernel factorization and gridded self-supervised loss computation optimize training for spiral/radial k-space (gu et al., 2023).
  • Implicit Neural Representations: Coordinate-based INRs enable regularization and spatial continuity in scan-specific zero-shot settings (Xu et al., 8 Oct 2025).
  • Materials Graph Learning: GNNs are augmented with physics-pretext tasks (e.g., local atomic stiffness) and multi-view contrastive learning for improved transferability and convergence (Fu et al., 2024).
  • Physics-informed Balancing: Uncertainty-aware learned coefficients between physics and data terms (e.g., SPINN's physics-coefficient neuron (Pirayeshshirazinezhad, 7 Sep 2025)) allow optimal adaptation in data-scarce or domain-shifted settings.
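
The uncertainty-aware balancing in the last bullet can be sketched generically with learned log-variance weighting in the style of multi-task homoscedastic uncertainty (Kendall and Gal); this is a generic stand-in for illustration, not SPINN's exact physics-coefficient neuron.

```python
import numpy as np

def uncertainty_weighted_loss(loss_data, loss_phys, log_var_data, log_var_phys):
    """Combine data and physics losses with learned log-variances: each term is
    down-weighted by exp(-log_var) and penalized by +log_var, so the optimizer
    can lower a term's weight (raise its uncertainty) only at a cost."""
    return (np.exp(-log_var_data) * loss_data + log_var_data
            + np.exp(-log_var_phys) * loss_phys + log_var_phys)

# With zero log-variances both terms receive unit weight:
total = uncertainty_weighted_loss(1.0, 1.0, 0.0, 0.0)  # 2.0
```

In training, the log-variances are ordinary parameters updated by gradient descent alongside the network weights, letting the model shift emphasis between physics and data terms in data-scarce or domain-shifted regimes.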

6. Strengths, Limitations, and Implications

Key strengths include:

  • Ability to learn in the absence of fully-sampled or labeled references, crucial for studies where physical or physiological constraints preclude exhaustive acquisition (e.g., high-res MRI, rare materials datasets).
  • Performance parity with supervised networks, demonstrated by quantitative metrics and expert evaluation (Yaman et al., 2019, Yaman et al., 2020, Lune et al., 8 Jan 2026).
  • Embedding physical models enforces domain consistency and prevents nonphysical solutions or hallucinations, supporting robust generalization (Huang et al., 2022, Wang et al., 2024).

Limitations:

  • Performance and artifact suppression depend on careful selection of mask-split parameters ($\rho$, multi-mask design), regularizer strength, and the capacity of both the physics and network modules.
  • Memory and compute costs scale with model complexity; 3D or multi-contrast deployments remain hardware-limited (Yaman et al., 2020).
  • For material-property learning, the relevance and alignment of physics-pretext tasks to the target properties are critical; misaligned choices can increase downstream error (Fu et al., 2024).

Plausible implications include the ability to unify learning-based and traditional inversion for high-stakes scientific domains, robust cross-protocol harmonization (e.g., in neuroimaging), and direct learning of physically interpretable representations in ill-posed/high-dimensional parameter regimes.

7. Outlook and Future Directions

Current trends in self-supervised physics-guided deep learning frameworks include:

  • Expanding to more complex and nonlinear physical domains (e.g., nonlinear elasticity, electrodynamics with Maxwell solvers).
  • Incorporating model-based uncertainty quantification and adaptive trade-offs.
  • Joint learning of mask design and model weights.
  • Meta-learning or transfer learning to reduce scan- or sample-specific optimization cost (Xu et al., 8 Oct 2025).
  • Fully unsupervised generative modeling with embedded physical constraints.

Open research directions point toward integrated, generalizable, and robust deep learning systems for scientific discovery, engineering design, and clinical translation in domains with limited direct ground truth, grounded by first-principles physics and rigorous self-supervision.
