Physics-Constrained Gaussian Process Regression
- Physics-constrained Gaussian Process Regression integrates physical knowledge via tailored kernels, constraints, and augmented likelihoods to enforce laws like conservation and boundary conditions.
- It reduces data requirements and improves extrapolation reliability by embedding physical structures directly into the regression framework.
- This approach is applied in surrogate modeling, inverse problems, and state estimation, offering enhanced interpretability and controlled uncertainty.
Physics-constrained Gaussian process regression (GPR) refers to a family of modeling frameworks that incorporate physical laws, analytical constraints, or mechanistic knowledge directly into the prior, kernel, likelihood, or learning process of GPR. This methodology improves data efficiency and robustness under extrapolation, and guarantees that predictions adhere to fundamental physical principles such as conservation laws, boundary conditions, monotonicity, or integral constraints.
1. Fundamental Principles and Motivation
Standard GPR is a nonparametric Bayesian regression technique that models latent functions as a Gaussian process with a mean function and a covariance kernel. While extremely flexible, unconstrained GPR relies purely on data and typically assumes stationarity, leading to poor performance in data-scarce, high-dimensional settings or when the underlying system is governed by strong physical laws. Physics-constrained GPR embeds these known physical structures, reducing the data required for accurate prediction and providing posterior distributions that are consistent with governing equations or inequalities (Chang et al., 2022, Ma et al., 2020, Cross et al., 2023).
Incorporating physics can be achieved by:
- Designing kernels consistent with Green’s functions or spectral properties of PDEs.
- Imposing constraints on the GP prior or mean (e.g., boundary conditions).
- Augmenting the likelihood or the evidence with penalties or soft-constraints derived from physical residuals.
- Constraining optimization or posterior inference to satisfy monotonicity, normalization, or nonnegativity.
2. Classes of Physical Constraints and Formalisms
Physics constraints in GPR frameworks manifest at multiple levels:
a) Hard Constraints and Boundary Enforcement
The GP prior is formulated so that all realizations satisfy boundary conditions or operator constraints. For example, using spectral expansions in the eigenfunctions of the domain's Laplacian or linear PDE operator, the covariance is constructed as
$$k(x, x') = \sum_{i} S(\lambda_i)\, \phi_i(x)\, \phi_i(x'),$$
where the eigenpairs $(\lambda_i, \phi_i)$ solve the BVP with prescribed boundary conditions and $S(\cdot)$ is the spectral density of the chosen stationary covariance (Gulian et al., 2020, Solin et al., 2019, Seyedheydari et al., 4 Jul 2025). This ensures both the posterior mean and all sample paths satisfy the BCs by construction.
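A minimal NumPy sketch of this construction, assuming a 1-D domain $[0, 1]$ with homogeneous Dirichlet boundary conditions, sine eigenfunctions of the Laplacian, and a squared-exponential spectral density; all function names and parameter values are illustrative rather than taken from the cited works.

```python
import numpy as np

def dirichlet_eigenfunctions(x, L=1.0, M=50):
    """Laplacian eigenfunctions on [0, L] with f(0) = f(L) = 0."""
    j = np.arange(1, M + 1)                                        # mode indices
    phi = np.sqrt(2.0 / L) * np.sin(np.pi * np.outer(x, j) / L)    # (n, M)
    lam = (np.pi * j / L) ** 2                                     # Laplacian eigenvalues
    return phi, lam

def se_spectral_density(omega_sq, sigma=1.0, ell=0.2):
    """Spectral density of the squared-exponential kernel, evaluated at the eigenvalues."""
    return sigma**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * ell**2 * omega_sq)

def bc_constrained_kernel(x1, x2, L=1.0, M=50):
    """k(x, x') = sum_j S(lambda_j) phi_j(x) phi_j(x'); every sample path vanishes at 0 and L."""
    phi1, lam = dirichlet_eigenfunctions(x1, L, M)
    phi2, _ = dirichlet_eigenfunctions(x2, L, M)
    S = se_spectral_density(lam)
    return (phi1 * S) @ phi2.T

# Draw prior samples: all of them satisfy the Dirichlet BCs by construction.
x = np.linspace(0.0, 1.0, 200)
K = bc_constrained_kernel(x, x)
samples = np.random.multivariate_normal(np.zeros(len(x)), K + 1e-10 * np.eye(len(x)), size=3)
print(samples[:, 0], samples[:, -1])   # ~0 at both boundaries for every sample
```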
b) Physics-informed Likelihoods and Evidence
Physical knowledge is encoded with a Boltzmann–Gibbs factor in the evidence,
$$p_{\mathrm{phys}}(f) \propto \exp\{-\beta\, \Phi(f)\},$$
where $\Phi(f)$ quantifies violations of residuals, boundary misfit, or variational energy, and $\beta$ tunes the strength of the constraint. The marginal evidence becomes
$$p(\mathbf{y}) \propto \int p(\mathbf{y} \mid f)\, p(f)\, e^{-\beta\, \Phi(f)}\, \mathrm{d}f,$$
which regularizes the data-fit by enforcing proximity to physical laws (Chang et al., 2022).
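As an illustration of how such a penalized evidence can be used in practice, the sketch below adds a residual term $\beta\,\Phi$ to the standard negative log marginal likelihood during hyperparameter optimization; the toy ODE $f' + \gamma f = 0$, the collocation grid, and the weight $\beta$ are assumptions chosen for demonstration, not the specific setup of Chang et al. (2022).

```python
import numpy as np
from scipy.optimize import minimize

# Toy physics: the latent function is known to obey the ODE  f'(x) + gamma * f(x) = 0.
gamma = 1.0
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=8)                 # scarce training inputs
y = np.exp(-gamma * X) + 0.01 * rng.normal(size=8)
Xc = np.linspace(0.0, 4.0, 40)                    # collocation points, including extrapolation region

def rbf(a, b, sigma, ell):
    d = a[:, None] - b[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def drbf_da(a, b, sigma, ell):
    """Derivative of k(a, b) with respect to the first argument."""
    d = a[:, None] - b[None, :]
    return -(d / ell**2) * rbf(a, b, sigma, ell)

def objective(log_params, beta=50.0):
    sigma, ell, noise = np.exp(log_params)
    K = rbf(X, X, sigma, ell) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # Standard negative log marginal likelihood (data fit + complexity terms).
    nlml = 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)
    # Physics penalty Phi: mean squared ODE residual of the posterior mean at collocation points.
    m = rbf(Xc, X, sigma, ell) @ alpha
    dm = drbf_da(Xc, X, sigma, ell) @ alpha
    phi = np.mean((dm + gamma * m) ** 2)
    return nlml + beta * phi                      # data fit regularized by the physics penalty

res = minimize(objective, x0=np.log([1.0, 0.5, 1e-2]), method="L-BFGS-B")
print("optimized (sigma, ell, noise):", np.exp(res.x))
```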
c) Prior Statistics from Stochastic Models
Empirical mean and (nonstationary) covariance are computed from Monte Carlo realizations of stochastic PDEs or SDEs, so that predicted mean fields automatically satisfy physical linear operators:
$$\bar{u}(x) = \frac{1}{M}\sum_{m=1}^{M} u_m(x), \qquad C(x, x') = \frac{1}{M-1}\sum_{m=1}^{M}\bigl(u_m(x) - \bar{u}(x)\bigr)\bigl(u_m(x') - \bar{u}(x')\bigr),$$
with the guarantee that the Kriging posterior mean preserves any linear constraint satisfied by every realization $u_m$, up to simulator error (Yang et al., 2018, Ma et al., 2020).
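A small sketch of this construction, assuming a toy ensemble of simulator realizations that all vanish at the domain boundary; the empirical moments play the role of the physics-informed prior, and the standard Kriging update then preserves the boundary constraint automatically.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)

# Monte Carlo ensemble: solutions u_m(x) of a toy stochastic model (random-coefficient sine modes).
# Every realization satisfies u(0) = u(1) = 0, so the Kriging posterior mean will too.
M = 500
coeffs = rng.normal(size=(M, 3)) * np.array([1.0, 0.5, 0.25])
modes = np.sin(np.pi * np.outer(np.arange(1, 4), x))          # (3, 101)
U = coeffs @ modes                                            # (M, 101) ensemble

mu = U.mean(axis=0)                                           # empirical (physics-consistent) mean
C = np.cov(U, rowvar=False)                                   # empirical nonstationary covariance

# Condition on a few noisy point observations (physics-informed Kriging update).
obs_idx = np.array([20, 55, 80])
y_obs = U[0, obs_idx] + 0.01 * rng.normal(size=3)             # pretend realization 0 is the "truth"
noise = 1e-4
Koo = C[np.ix_(obs_idx, obs_idx)] + noise * np.eye(3)
Kso = C[:, obs_idx]
w = np.linalg.solve(Koo, y_obs - mu[obs_idx])
post_mean = mu + Kso @ w
post_var = np.diag(C) - np.einsum("ij,ji->i", Kso, np.linalg.solve(Koo, Kso.T))

print("boundary values of posterior mean:", post_mean[0], post_mean[-1])   # remain ~0
print("boundary posterior variances:", post_var[0], post_var[-1])          # remain ~0
```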
d) Derivative-based and Inequality Constraints
Monotonicity, convexity, nonnegativity, or normalization can be enforced via transformation of the GP prior over functionals, e.g., including derivatives in the joint GP and imposing pointwise constraints on those derivatives or integrals. For monotonic GPs, a probit likelihood on the first derivatives is employed at a set of inducing points:
$$p\bigl(m_j = 1 \mid f'(x_j)\bigr) = \Phi_{\mathcal{N}}\!\left(\frac{f'(x_j)}{\nu}\right),$$
where $\Phi_{\mathcal{N}}$ is the standard normal CDF, the $x_j$ are derivative-inducing points, and $\nu$ controls the sharpness of the constraint, with expectation propagation used for posterior inference (Tran et al., 2022, Pensoneault et al., 2020, Li et al., 6 Jan 2026, Seyedheydari et al., 4 Jul 2025).
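The sketch below illustrates the ingredients (a joint covariance over function values and derivatives, plus probit factors at derivative-inducing points); for brevity it replaces expectation propagation with simple importance weighting of posterior samples, so it demonstrates the formulation rather than the inference scheme of the cited works. Kernel choices and grid sizes are assumptions.

```python
import numpy as np
from scipy.stats import norm

# RBF kernel and its derivative cross-covariances (1-D), used to place a joint GP
# prior over function values f(X) and derivatives f'(Xd) at derivative-inducing points.
def k(a, b, s=1.0, l=0.3):
    d = a[:, None] - b[None, :]
    return s**2 * np.exp(-0.5 * (d / l) ** 2)

def k_fd(a, b, s=1.0, l=0.3):        # Cov[f(a), f'(b)]
    d = a[:, None] - b[None, :]
    return (d / l**2) * k(a, b, s, l)

def k_dd(a, b, s=1.0, l=0.3):        # Cov[f'(a), f'(b)]
    d = a[:, None] - b[None, :]
    return (1.0 / l**2 - (d / l**2) ** 2) * k(a, b, s, l)

rng = np.random.default_rng(2)
X = np.linspace(0.0, 1.0, 5)                 # observation inputs
y = np.array([0.0, 0.2, 0.25, 0.6, 1.0])     # noisy but monotone-looking data
Xd = np.linspace(0.0, 1.0, 10)               # derivative-inducing points
Xs = np.linspace(0.0, 1.0, 50)               # prediction grid
nu = 0.05                                    # probit sharpness parameter

# Exact GP posterior over (f(Xs), f'(Xd)) given y, before the monotonicity factors.
noise = 1e-3
Kxx = k(X, X) + noise * np.eye(len(X))
K_joint = np.block([[k(Xs, Xs),       k_fd(Xs, Xd)],
                    [k_fd(Xs, Xd).T,  k_dd(Xd, Xd)]])
K_cross = np.vstack([k(Xs, X), k_fd(X, Xd).T])        # Cov[(f(Xs), f'(Xd)), f(X)]
A = np.linalg.solve(Kxx, K_cross.T).T
post_mean = A @ y
post_cov = K_joint - A @ K_cross.T + 1e-6 * np.eye(len(Xs) + len(Xd))

# Importance-weight posterior samples by the probit monotonicity factors
# prod_j Phi(f'(xd_j) / nu); EP (as in the cited work) would build a Gaussian approximation instead.
S = rng.multivariate_normal(post_mean, post_cov, size=2000)
fprime = S[:, len(Xs):]
w = norm.cdf(fprime / nu).prod(axis=1)
w /= w.sum()
mono_mean = w @ S[:, :len(Xs)]
print("constrained mean non-decreasing:", bool(np.all(np.diff(mono_mean) >= -1e-6)))
print("fraction of unconstrained samples fully non-decreasing:", np.mean((fprime > 0).all(axis=1)))
```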
3. Representative Methodologies
Deep Kernel Physics-Constrained GPR
A hybrid approach encodes physical laws via a Boltzmann–Gibbs term and parametrizes the covariance as a deep kernel:
$$k(x, x') = k_{\mathrm{base}}\bigl(g_\theta(x),\, g_\theta(x')\bigr),$$
where $g_\theta$ is a neural-network feature map trained jointly with the physics-constrained loss. Minibatched training with "stochastic inducing points" renders computation tractable, enabling uncertainty propagation in high dimensions with minimal labeled data (Chang et al., 2022).
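A minimal sketch of the deep-kernel construction with a small, randomly initialized NumPy feature map; in the cited approach the weights $\theta$ would be trained jointly with the physics-constrained loss and stochastic inducing points, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-layer feature map g_theta: R^d -> R^h (weights would normally be trained
# jointly with the physics-constrained loss; here they are just randomly initialized).
d_in, d_hidden, d_feat = 2, 32, 8
theta = {
    "W1": rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_hidden)),
    "b1": np.zeros(d_hidden),
    "W2": rng.normal(scale=1.0 / np.sqrt(d_hidden), size=(d_hidden, d_feat)),
    "b2": np.zeros(d_feat),
}

def g(x, theta):
    h = np.tanh(x @ theta["W1"] + theta["b1"])
    return h @ theta["W2"] + theta["b2"]

def deep_kernel(x1, x2, theta, sigma=1.0, ell=1.0):
    """k(x, x') = k_base(g_theta(x), g_theta(x')) with an RBF base kernel."""
    z1, z2 = g(x1, theta), g(x2, theta)
    sq = ((z1[:, None, :] - z2[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * sq / ell**2)

X = rng.uniform(size=(100, d_in))
K = deep_kernel(X, X, theta)
print(K.shape, np.allclose(K, K.T))   # (100, 100) symmetric Gram matrix
```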
Joint Posterior with Constraints
The generalized joint prior includes both function values and their transforms under a linear operator $\mathcal{L}$:
$$\begin{pmatrix} f \\ \mathcal{L}f \end{pmatrix} \sim \mathcal{GP}\!\left(\mathbf{0},\; \begin{pmatrix} k(x, x') & \mathcal{L}_{x'} k(x, x') \\ \mathcal{L}_{x} k(x, x') & \mathcal{L}_{x}\mathcal{L}_{x'} k(x, x') \end{pmatrix}\right).$$
Conditioning on observations of $f$ and $\mathcal{L}f$ yields a posterior that preserves operator constraints globally (Gulian et al., 2020, Cross et al., 2023).
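A compact example of conditioning on operator observations, assuming an RBF kernel, the 1-D operator $\mathcal{L} = \mathrm{d}^2/\mathrm{d}x^2$, and noise-free collocation values of $\mathcal{L}f$ for a toy Poisson-type problem; the closed-form derivative cross-covariances used below are the standard ones for the squared-exponential kernel.

```python
import numpy as np

s, l = 1.0, 0.2

def k(a, b):
    d = a[:, None] - b[None, :]
    return s**2 * np.exp(-0.5 * (d / l) ** 2)

def k_f_L(a, b):          # Cov[f(a), f''(b)]
    d = a[:, None] - b[None, :]
    return (d**2 / l**4 - 1.0 / l**2) * k(a, b)

def k_L_L(a, b):          # Cov[f''(a), f''(b)]
    d = a[:, None] - b[None, :]
    return (3.0 / l**4 - 6.0 * d**2 / l**6 + d**4 / l**8) * k(a, b)

rng = np.random.default_rng(4)
f_true = lambda x: np.sin(np.pi * x)
source = lambda x: -np.pi**2 * np.sin(np.pi * x)      # known physics: Lf = f'' = source

Xf = rng.uniform(0, 1, 4)                   # scarce direct observations of f
yf = f_true(Xf) + 0.01 * rng.normal(size=4)
Xl = np.linspace(0, 1, 15)                  # collocation points where Lf is observed exactly
yl = source(Xl)
Xs = np.linspace(0, 1, 101)

# Joint Gram matrix over [f(Xf), (Lf)(Xl)] and cross-covariances to the prediction grid.
K_obs = np.block([[k(Xf, Xf) + 1e-4 * np.eye(4),  k_f_L(Xf, Xl)],
                  [k_f_L(Xf, Xl).T,               k_L_L(Xl, Xl) + 1e-8 * np.eye(15)]])
K_star = np.hstack([k(Xs, Xf), k_f_L(Xs, Xl)])
post_mean = K_star @ np.linalg.solve(K_obs, np.concatenate([yf, yl]))
print("max error of operator-constrained posterior mean:", np.abs(post_mean - f_true(Xs)).max())
```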
Regularization by Lagrange Multipliers or Pseudo-Observations
Enforcing normalization, conservation, or integral constraints can be cast via Lagrange multiplier optimization in a kernel eigenbasis, or by augmenting the observation vector with "pseudo-measurements" of the constrained linear functional $\mathcal{H}f$:
$$\tilde{\mathbf{y}} = \begin{pmatrix} \mathbf{y} \\ c \end{pmatrix}, \qquad c = \mathcal{H}f \ \text{observed noise-free (e.g., } \textstyle\int_\Omega f(x)\,\mathrm{d}x = 1\text{)},$$
leading to joint posteriors that satisfy the constraint exactly (Seyedheydari et al., 4 Jul 2025).
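A sketch of the pseudo-measurement route for a normalization constraint, assuming trapezoidal quadrature to represent $\int_0^1 f(x)\,\mathrm{d}x$ as a linear functional of the GP; the grid sizes and toy target function are illustrative.

```python
import numpy as np

def rbf(a, b, s=1.0, l=0.2):
    d = a[:, None] - b[None, :]
    return s**2 * np.exp(-0.5 * (d / l) ** 2)

rng = np.random.default_rng(5)
# Target: a density-like function on [0, 1] whose integral must equal 1.
f_true = lambda x: 6.0 * x * (1.0 - x)               # integrates to 1 on [0, 1]
X = rng.uniform(0, 1, 5)
y = f_true(X) + 0.05 * rng.normal(size=5)
Xs = np.linspace(0, 1, 201)

# Quadrature nodes/weights approximate the linear functional H f = integral of f over [0, 1].
Xq = np.linspace(0, 1, 101)
wq = np.full(101, 1.0 / 100); wq[[0, -1]] *= 0.5      # trapezoidal weights

noise = 0.05**2
Kxx = rbf(X, X) + noise * np.eye(5)
k_xH = rbf(X, Xq) @ wq                                # Cov[f(X), Hf]
k_HH = wq @ rbf(Xq, Xq) @ wq                          # Var[Hf]
k_sx = rbf(Xs, X)
k_sH = rbf(Xs, Xq) @ wq

# Augment the data with the noise-free pseudo-measurement Hf = 1.
K_aug = np.block([[Kxx,            k_xH[:, None]],
                  [k_xH[None, :],  np.array([[k_HH + 1e-10]])]])
y_aug = np.concatenate([y, [1.0]])
K_star = np.hstack([k_sx, k_sH[:, None]])
post_mean = K_star @ np.linalg.solve(K_aug, y_aug)

# Check the constraint on a trapezoidal grid (exact on the Xq quadrature defining it).
ws = np.full(201, 1.0 / 200); ws[[0, -1]] *= 0.5
print("integral of constrained posterior mean:", post_mean @ ws)
```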
Hybrid Multi-fidelity GPs
Low-fidelity predictions from physics-based simulators are combined with high-fidelity data via autoregressive co-kriging:
$$f_{\mathrm{H}}(x) = \rho\, f_{\mathrm{L}}(x) + \delta(x),$$
with the low-fidelity term $f_{\mathrm{L}}$ physics-informed or Monte-Carlo built, and a GP for the discrepancy $\delta$, so the aggregated model leverages physical structure and corrects for model bias (Yang et al., 2018).
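A minimal sketch of the autoregressive structure, with a cheap analytic stand-in for the physics-based low-fidelity model, a least-squares estimate of $\rho$, and a GP fitted to the discrepancy; hyperparameters are fixed rather than optimized, and all names are illustrative.

```python
import numpy as np

def rbf(a, b, s=1.0, l=0.3):
    d = a[:, None] - b[None, :]
    return s**2 * np.exp(-0.5 * (d / l) ** 2)

rng = np.random.default_rng(6)
low_fid = lambda x: np.sin(2 * np.pi * x)                        # cheap physics-based model
high_fid = lambda x: 1.3 * np.sin(2 * np.pi * x) + 0.3 * x**2    # expensive "truth"

X_hi = rng.uniform(0, 1, 6)                 # scarce high-fidelity data
y_hi = high_fid(X_hi)
Xs = np.linspace(0, 1, 200)

# Autoregressive co-kriging y_H(x) = rho * y_L(x) + delta(x):
# estimate the scale rho by least squares, then model the discrepancy delta with a GP.
yl_at_hi = low_fid(X_hi)
rho = (yl_at_hi @ y_hi) / (yl_at_hi @ yl_at_hi)
delta_obs = y_hi - rho * yl_at_hi

noise = 1e-6
K = rbf(X_hi, X_hi) + noise * np.eye(6)
alpha = np.linalg.solve(K, delta_obs)
delta_mean = rbf(Xs, X_hi) @ alpha

pred = rho * low_fid(Xs) + delta_mean        # physics-informed trend + learned correction
print("max |pred - high_fid| on the grid:", np.abs(pred - high_fid(Xs)).max())
```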
4. Quantitative Impact, Data Efficiency, and Limitations
Physics-constrained GPR demonstrates marked advantages:
- Order-of-magnitude reduction in the labeled data required for accurate uncertainty quantification in PDE surrogates (Chang et al., 2022).
- Preservation of essential physical constraints even under extrapolation, e.g., non-decreasing strength with pressure in surrogate concrete constitutive models, with NRMSE reduced from 11.6% (unconstrained) to 6.7% (constrained), and R² increasing from 0.49 to 0.88 (Li et al., 6 Jan 2026).
- Propagation of non-stationary, physically meaningful covariances through the forecast horizon in power systems and multivariate coupled oscillatory systems (Ma et al., 2020, Cross et al., 2023).
- Posterior variances and uncertainty bands that contract dramatically in physics-constrained regions or under functional constraints (e.g., monotonicity or normalization) (Tran et al., 2022, Seyedheydari et al., 4 Jul 2025).
- In ill-posed inverse problems (e.g., inverse light scattering), constrained GPR prevents physically implausible solutions, guaranteeing property enforcement such as normalization or boundedness (Seyedheydari et al., 4 Jul 2025).
However, key limitations and trade-offs emerge:
- Setting the scalar $\beta$, i.e., the relative weighting between data-fit and physical loss, requires tuning for an optimal bias-variance trade-off (Chang et al., 2022).
- In non-Gaussian or non-linear constraint settings, the posterior may only be available in approximate form (e.g., via expectation propagation or variational inference) (Tran et al., 2022).
- Physical constraints may restrict extrapolation flexibility if not properly constructed.
- Constructing eigenfunction bases on complex domains or with uncertain operators may require significant computation (Gulian et al., 2020, Solin et al., 2019).
5. Computational Complexity and Practical Algorithms
Typical unconstrained GPR scales as $\mathcal{O}(n^3)$ in the number of training points $n$ due to kernel inversion. Physics-constrained approaches introduce several computational optimizations:
- Spectral truncation to $m \ll n$ eigenmodes yields complexity $\mathcal{O}(nm^2)$ rather than $\mathcal{O}(n^3)$, achieving efficient reduced-rank inference (Solin et al., 2019, Gulian et al., 2020, Seyedheydari et al., 4 Jul 2025).
- Stochastic inducing-point strategies allow scaling to minibatches without loss of constraint enforcement in deep kernel hybrid GPs (Chang et al., 2022).
- For monotonicity and derivative constraints, a moderate number (10–20) of derivative-inducing points keeps expectation propagation tractable (cost cubic in the combined number of observations and inducing points) in small-to-moderate settings (Tran et al., 2022).
- In multifidelity frameworks, only a small number of discrepancy-kernel hyperparameters require optimization, as the physics-informed prior is fixed (Yang et al., 2018).
6. Applications, Case Studies, and Empirical Performance
Physics-constrained GPR has been demonstrated in a wide range of domains:
- Surrogate modeling for PDE-governed fields—diffusion, channel flow, or elasticity—where deep-kernel hybrid GPR provides UQ with minimal supervision (Chang et al., 2022).
- Probabilistic state estimation in power grids using SDE-based physics-informed priors delivering nonstationary, cross-covariant joint predictions for observed and unobserved states (Ma et al., 2020).
- Constitutive law surrogate modeling under derivative monotonicity constraints, outperforming empirical models in reliability and uncertainty estimates (Li et al., 6 Jan 2026).
- Inverse problems such as Fredholm integral equation inversion for particle size distributions, enforcing normalization for physically plausible inference (Seyedheydari et al., 4 Jul 2025).
- Enforcement of holonomic and non-holonomic mechanical constraints in learned rigid-body dynamics models using Gauss’ principle projections (Geist et al., 2020).
- Model predictive control for nonlinear systems via GPs constructed with Smith-normal-form-regularized kernels encoding the local linearization of the nonlinear ODE at stable equilibria (Lepp et al., 30 Apr 2025).
A pattern emerges of substantial improvements in reliability, interpretability, and robustness, especially in extrapolation regimes or where labeled data are scarce.
7. Outlook and Extensions
Future directions include:
- Extending frameworks to time-dependent or nonlinear PDEs.
- Developing alternative kernel architectures (e.g., graph-based, attention-driven) or operator-based constraints tailored to more complex domains (Chang et al., 2022).
- Generalization to multi-output and coupled systems via latent force models or co-kriging structures (Camps-Valls et al., 2020, Cross et al., 2023).
- Rigorous theoretical analysis of generalization, error bounds, and the impact of regularization by physics-constrained evidence.
- Scalability advances through efficient computation of spectral bases, sparse variational methods, and exploiting structure in complex operators or domains.
Physics-constrained GPR provides a principled path to combining statistical flexibility with mechanistic rigor, bridging the gap between machine learning and scientific computing for uncertainty-aware, data-efficient surrogate and inference tasks across scientific and engineering domains (Chang et al., 2022, Cross et al., 2023).