
Physics-Constrained Gaussian Process Regression

Updated 11 April 2026
  • Physics-constrained Gaussian Process Regression is a method that integrates physical laws, such as PDEs and boundary conditions, into GP models for precise and data-efficient inference.
  • It employs eigenfunction-based kernel construction and spectral expansion to ensure that all samples rigorously satisfy the imposed physical and boundary constraints.
  • The approach supports multifidelity and hybrid deep-kernel designs, offering scalable solutions with robust uncertainty quantification in high-dimensional or scarce data scenarios.

Physics-constrained Gaussian Process Regression (GPR) refers to a class of Gaussian process modeling techniques in which physical laws—typically in the form of partial differential equations (PDEs), boundary conditions, or more generally linear operator constraints—are exactly or probabilistically enforced on the GP prior, the kernel, the mean, or the structure of inference. These methods allow the solution or surrogate to be informed not only by data but also by mechanistic knowledge, thereby enabling data-efficient, stable, and physically consistent regression for high-dimensional or scarce-data problems commonly encountered in scientific and engineering domains.

1. Foundations: GP Priors under Differential and Boundary Constraints

Physics-constrained GPR formalizes the inference of latent functions u(x) that are known, a priori, to satisfy a (typically linear) boundary-value problem

L u(x) = f(x),  x ∈ Ω,    B u(x) = 0,  x ∈ ∂Ω,

where L is a linear second-order differential operator (e.g., elliptic, parabolic, hyperbolic), and B denotes homogeneous boundary conditions such as Dirichlet (u = 0), Neumann (∂u/∂n = 0), or mixed types (Gulian et al., 2020). The essence of the physics-constrained paradigm is to choose a GP prior on u such that every prior sample u ∼ GP(m_u, K_u) satisfies, either exactly or to a quantifiable error, the required operator and boundary constraints (Pförtner et al., 2022, Henderson et al., 2021).

For linear operator constraints, a necessary and sufficient condition for a stochastic process to satisfy T[u] = 0 in the sense of distributions is that both the mean m_u and each kernel section K_u(·, x′) are annihilated by T; i.e., T m_u = 0 and T K_u(·, x′) = 0 for every x′ in the domain (Henderson et al., 2021).
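As a minimal numerical illustration of this annihilation condition, consider the hypothetical constraint operator T = ∂/∂x₁ + ∂/∂x₂ (a simple transport-type constraint, chosen here only for illustration). A kernel depending on the inputs solely through the invariant coordinate x₁ − x₂ is annihilated by T, which can be verified by finite differences:

```python
import numpy as np

def k(x, y):
    # Kernel that depends only on the invariant coordinate s = x1 - x2,
    # so every sample path is constant along the direction (1, 1) and
    # satisfies T[u] = du/dx1 + du/dx2 = 0.
    s, t = x[0] - x[1], y[0] - y[1]
    return np.exp(-0.5 * (s - t) ** 2)

x, y, h = np.array([0.3, -0.7]), np.array([1.1, 0.4]), 1e-5
# Apply T to the kernel in its first argument via central differences.
d1 = (k(x + [h, 0], y) - k(x - [h, 0], y)) / (2 * h)
d2 = (k(x + [0, h], y) - k(x - [0, h], y)) / (2 * h)
print(abs(d1 + d2))  # ~0: T annihilates each kernel section, as required
```

Since the condition must hold for the mean as well, the same check would be applied to m_u; here the implicit zero mean trivially satisfies it.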

2. Physics-Constrained Kernel Construction

A canonical approach to building physics-constrained kernels is to expand u in a truncated set of eigenfunctions {φ_k} of the homogeneous BVP,

L φ_k(x) = λ_k φ_k(x),  x ∈ Ω,    B φ_k(x) = 0,  x ∈ ∂Ω,

and define

K_u(x, x′) = Σ_{k=1}^{n} σ_k² φ_k(x) φ_k(x′),

with the mode variances σ_k² determined from the spectral density of a parent kernel or learned from data. By restricting samples to span{φ_1, …, φ_n}, all samples and the posterior mean exactly satisfy the boundary conditions (Gulian et al., 2020). The kernel for f = Lu is then K_f(x, x′) = Σ_k λ_k² σ_k² φ_k(x) φ_k(x′), and cross-covariances such as K_{uf}(x, x′) = Σ_k λ_k σ_k² φ_k(x) φ_k(x′) are similarly constructed.
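A concrete sketch of this construction, assuming a 1-D Dirichlet Laplacian on (0, 1) and mode variances taken from a squared-exponential spectral density (both illustrative choices, not a specific paper's exact setup):

```python
import numpy as np

# Dirichlet Laplacian on (0, 1): eigenfunctions phi_k = sqrt(2) sin(k pi x),
# eigenvalues lambda_k = (k pi)^2; truncate at n modes.
n, ell = 30, 0.2
k = np.arange(1, n + 1)
lam = (k * np.pi) ** 2
phi = lambda x: np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))

# Mode variances sigma_k^2 from the spectral density of a squared-exponential
# parent kernel, evaluated at the eigenfrequencies (one common choice).
sig2 = ell * np.sqrt(2.0 * np.pi) * np.exp(-0.5 * (ell * k * np.pi) ** 2)

x = np.linspace(0.0, 1.0, 6)
K_u = phi(x) * sig2 @ phi(x).T              # K_u = sum_k sigma_k^2 phi_k phi_k'
K_f = phi(x) * (lam**2 * sig2) @ phi(x).T   # covariance of f = L u

print(K_u[0, 0], K_u[-1, -1])  # ~0 at x = 0 and x = 1: samples obey the BCs
```

Because every eigenfunction vanishes on the boundary, the prior variance is exactly zero there, so every sample and the posterior mean satisfy the Dirichlet conditions by construction.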

Alternatively, for vector-valued PDEs (e.g., incompressible fluid flow), divergence-free and boundary-admissible covariance structures are constructed by taking vector differential operators of a scalar kernel and applying continuous boundary projections via the Karhunen–Loève (Mercer) expansion restricted to the domain boundary (Padilla-Segarra et al., 23 Jul 2025). Recent frameworks also enable arbitrary bounded linear operator constraints, allowing the use of observation functionals corresponding to point evaluation, domain integrals, or weighted residuals (Pförtner et al., 2022).
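The underlying mechanism of the divergence-free construction can be sketched by pushing a scalar GP sample (a stream function) through the vector operator u = (∂ψ/∂y, −∂ψ/∂x); the resulting field is divergence-free by construction. This is an illustrative sketch of the idea, not the boundary-projected kernel of the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
pts = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1).reshape(-1, 2)

# Squared-exponential kernel for the scalar stream function psi.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.15**2))
psi = rng.multivariate_normal(np.zeros(n * n), K + 1e-8 * np.eye(n * n))
psi = psi.reshape(n, n)

# Vector operator u = (d psi/dy, -d psi/dx); axis 0 = x, axis 1 = y.
dpsi_dx, dpsi_dy = np.gradient(psi, h, h)
u1, u2 = dpsi_dy, -dpsi_dx
div = np.gradient(u1, h, h)[0] + np.gradient(u2, h, h)[1]
print(np.abs(div).max())  # ~0 (up to rounding): the field is divergence-free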

3. Posterior Inference and Reduced-Rank Computation

Given observations y, potentially of both u and f, at locations X_u and X_f, the joint GP for (u, f) has a block covariance assembled by the above operator actions. The posterior mean and covariance at any new point x⋆ are then computed via standard GP conditioning:

m(x⋆) = K⋆ K̃⁻¹ y,    V(x⋆, x⋆) = K(x⋆, x⋆) − K⋆ K̃⁻¹ K⋆ᵀ,

with K̃ denoting the augmented covariance with noise terms (Gulian et al., 2020).
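The cross-covariance structure lets forcing observations alone determine the solution. A minimal sketch, assuming the toy 1-D Poisson problem −u″ = f on (0, 1) with homogeneous Dirichlet conditions (an illustrative choice of L):

```python
import numpy as np

# Eigenpairs of L = -d^2/dx^2 with u(0) = u(1) = 0:
# phi_k = sqrt(2) sin(k pi x), lambda_k = (k pi)^2.
m = 25
k = np.arange(1, m + 1)
lam = (k * np.pi) ** 2
phi = lambda x: np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))
sig2 = 0.3 * np.sqrt(2.0 * np.pi) * np.exp(-0.5 * (0.3 * k * np.pi) ** 2)

f_true = lambda x: np.pi**2 * np.sin(np.pi * x)     # corresponds to u = sin(pi x)
Xf = np.linspace(0.05, 0.95, 12)
y = f_true(Xf)                                      # noiseless forcing data only

K_ff = phi(Xf) * (lam**2 * sig2) @ phi(Xf).T        # covariance of f observations
Xs = np.linspace(0.0, 1.0, 101)
K_uf = phi(Xs) * (lam * sig2) @ phi(Xf).T           # cross-covariance between u and f

u_mean = K_uf @ np.linalg.solve(K_ff + 1e-8 * np.eye(len(Xf)), y)
print(np.abs(u_mean - np.sin(np.pi * Xs)).max())    # small reconstruction error
```

The posterior mean for u is recovered from f data alone, and vanishes exactly at the boundary because every mode does.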

For spectral expansions with n modes and N observations, computation can be reduced to O(Nn) for assembly, O(Nn² + n³) for inversion, and O(n) per prediction via reduced-rank matrix identities (the Woodbury formula). This provides significant acceleration when n is much less than N (Gulian et al., 2020).
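The Woodbury route can be checked directly against the dense solve on synthetic data (random features stand in for eigenfunction evaluations here):

```python
import numpy as np

# Reduced-rank solve via the Woodbury identity: with K = Phi S Phi^T + s2 I,
# invert an m x m system instead of an N x N one (m = modes << N = observations).
rng = np.random.default_rng(1)
N, m, s2 = 1200, 30, 1e-2
Phi = rng.standard_normal((N, m))            # stand-in eigenfunction design matrix
s = 1.0 / np.arange(1, m + 1) ** 2           # mode variances
y = rng.standard_normal(N)

# Direct O(N^3) route.
alpha_direct = np.linalg.solve((Phi * s) @ Phi.T + s2 * np.eye(N), y)

# Woodbury route: O(N m^2) matrix products plus one O(m^3) solve.
inner = np.diag(s2 / s) + Phi.T @ Phi        # m x m system
alpha_wb = (y - Phi @ np.linalg.solve(inner, Phi.T @ y)) / s2
print(np.abs(alpha_direct - alpha_wb).max())  # agree to numerical precision
```

Both routes produce the same weight vector; only the cost differs.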

4. Comparison to Unconstrained or Softly Constrained Approaches

Standard "physics-informed" GPR constructs the joint kernel by applying differential operators to stationary kernels (e.g., squared-exponential) and typically enforces PDE or boundary constraints via penalization, "soft" virtual points, or by adding data at the boundary (Gulian et al., 2020, Cross et al., 2023). Such approaches cannot ensure zero posterior variance on ∂Ω, nor exact satisfaction of boundary conditions; posterior means may drift with sparse data, and credible intervals inflate near the boundaries.

By leveraging eigenfunction-based kernels or direct projection in function space, physics-constrained GPR ensures that both the posterior mean and samples lie precisely within the constraint manifold. This yields sharper extrapolation, precise uncertainty quantification, and increased stability, especially in the sparse-data regime (Gulian et al., 2020, Pförtner et al., 2022).

5. Integration with Multifidelity and Data-Driven Enhancements

Modern frameworks extend physics-constrained GPR with multifidelity and discrepancy models. A low-fidelity GP is constructed via physics-informed Kriging from stochastic realizations of imperfect simulations, enforcing the operator constraint up to a quantifiable error. A data-driven GP for the discrepancy is then co-kriged with the physics GP, with the joint model trained via marginal likelihood. Error bounds for constraint preservation, as well as active learning strategies targeting uncertainty maxima, are naturally accommodated (Yang et al., 2018).
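A stripped-down sketch of the discrepancy idea: treat the cheap physics model as the mean and fit a GP only to the high-fidelity residual. The model functions, length-scale, and design points below are hypothetical stand-ins, not the cited papers' setup:

```python
import numpy as np

se = lambda X, Y, ell: np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * ell**2))

f_lo = lambda x: np.sin(2 * np.pi * x)               # imperfect simulation (toy)
f_hi = lambda x: np.sin(2 * np.pi * x) + 0.3 * x**2  # expensive "truth" (toy)

X = np.array([0.05, 0.3, 0.55, 0.8, 0.95])           # few high-fidelity runs
delta = f_hi(X) - f_lo(X)                            # observed discrepancy

# GP on the discrepancy only; the physics model supplies the prior mean.
K = se(X, X, 0.4) + 1e-6 * np.eye(len(X))
Xs = np.linspace(0.0, 1.0, 101)
pred = f_lo(Xs) + se(Xs, X, 0.4) @ np.linalg.solve(K, delta)
print(np.abs(pred - f_hi(Xs)).max())  # small: discrepancy GP corrects the bias
```

Because the discrepancy is smoother and smaller than the full response, a handful of high-fidelity runs suffices to correct the physics model.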

Hybrid schemes also combine deep-kernel learning with physics regularization, using neural-network-based feature maps where the loss function includes both the standard GP likelihood and a physics-induced penalty (e.g., Boltzmann–Gibbs weight for the PDE residual). This approach enables scalable inference in high-dimensional or complex-geometry settings where traditional spectral methods may be prohibitive (Chang et al., 2022).
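The structure of such a composite objective can be sketched as follows; a random Fourier feature map stands in for the learned neural feature map, and the toy ODE −u″ = π²u plays the role of the physics, both being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 20)
y = np.sin(np.pi * X)                                # training data (toy)
Xc = np.linspace(0.0, 1.0, 50)                       # collocation points for the residual

W = rng.normal(0.0, 5.0, 8)
b = rng.uniform(0.0, 2 * np.pi, 8)
feat = lambda x: np.cos(np.outer(x, W) + b)          # stand-in for a neural feature map

def physics_regularized_loss(noise=1e-2, beta=10.0, h=1e-4):
    # GP marginal likelihood under the feature-map-induced kernel.
    Phi = feat(X)
    K = Phi @ Phi.T / Phi.shape[1] + noise * np.eye(len(X))
    nll = 0.5 * (y @ np.linalg.solve(K, y) + np.linalg.slogdet(K)[1])

    # Physics penalty: mean-squared residual of -u'' = pi^2 u, evaluated on
    # the posterior mean at the collocation points via finite differences.
    alpha = np.linalg.solve(K, y)
    mean = lambda x: (feat(x) @ Phi.T / Phi.shape[1]) @ alpha
    upp = (mean(Xc + h) - 2 * mean(Xc) + mean(Xc - h)) / h**2
    resid = -upp - np.pi**2 * mean(Xc)
    return nll + beta * np.mean(resid**2)

print(physics_regularized_loss())  # quantity minimized over the feature-map weights
```

In an actual deep-kernel scheme, W and b would be the trainable network parameters and this loss would be minimized by gradient descent; here it is evaluated once to show its two terms.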

6. Applications and Extensions

Physics-constrained GPR has been deployed for surrogate modeling of boundary-value-problem solutions (Gulian et al., 2020), divergence-free modeling of incompressible fluid flows (Padilla-Segarra et al., 23 Jul 2025), multifidelity emulation of imperfect simulations (Yang et al., 2018), and scalable deep-kernel inference in high-dimensional or complex-geometry settings (Chang et al., 2022).

7. Computational Considerations and Limitations

Exact enforcement of physics and boundary constraints comes at the price of computing eigenfunctions on arbitrary domains, which is tractable in regular geometries or via numerical eigensolvers. For large-scale or high-dimensional domains, scalable variants using inducing points, deep-kernel surrogates, or stochastic approximations are under development (Chang et al., 2022). The choice of basis size n sets a tradeoff between expressivity and computational cost. In problems where the PDE coefficients themselves are uncertain, the joint GP is extended to include these as random fields, enabling full Bayesian propagation of parametric uncertainty to the solution (Pförtner et al., 2022).

Physics-constrained GPR is not immune to model-form misspecification; if the physical operator or boundary description omits crucial mechanisms, the data-misfit will localize near these regions, and the model may be over-constrained or under-adaptive. In hybrid designs, identifiability between the physics-informed and data-driven kernel components may require careful regularization or cross-validation (Cross et al., 2023).


Physics-constrained Gaussian Process Regression constitutes a unifying probabilistic framework that rigorously interpolates between classic numerical PDE discretization, statistical machine learning, and inverse modeling. By exactly encoding physics into the model, it substantially reduces data requirements, provides robust extrapolation guarantees, and equips practitioners with a principled mechanism for uncertainty quantification and error control across a wide range of scientific and engineering inference tasks (Gulian et al., 2020, Pförtner et al., 2022, Padilla-Segarra et al., 23 Jul 2025, Li et al., 6 Jan 2026).
