
Extended PINNs (XPINN) Overview

Updated 9 January 2026
  • XPINNs are a domain-decomposition extension of PINNs that partition space-time domains into subdomains, each with a dedicated neural network, and enforce continuity via interface loss terms.
  • XPINNs enable parallel training and per-subdomain hyperparameter tuning, improving scalability, expressivity, and efficiency in multi-scale and multi-physics PDE applications.
  • XPINNs balance reduced local complexity against increased overfitting risk, providing performance benefits over monolithic PINNs when local solution features are simpler.

Extended Physics-Informed Neural Networks (XPINNs) are a prominent domain-decomposition extension of the Physics-Informed Neural Network (PINN) framework for learning solutions of partial differential equations (PDEs) using deep neural networks. XPINNs generalize and improve upon classical PINNs by decomposing the computational domain into multiple space-time subdomains, each equipped with a dedicated network, and enforcing solution and PDE-residual continuity at subdomain interfaces via carefully constructed loss terms. XPINNs have propelled advances in the parallelization, expressivity, and scalability of neural PDE solvers, with significant impact on multi-scale, multi-physics, and high-dimensional problems (Shukla et al., 2021, Hu et al., 2021, Rehman et al., 5 Nov 2025).

1. Mathematical Formulation and Domain Decomposition

XPINNs partition the space or space-time domain $\Omega\subset\mathbb{R}^d$ into $N_{\text{sd}}$ disjoint subdomains $\Omega_q$ ($q=1,\dots,N_{\text{sd}}$) such that

$$\Omega = \bigcup_{q=1}^{N_{\text{sd}}} \Omega_q, \qquad \Omega_q \cap \Omega_{q'} = \emptyset \ \text{ for } q\neq q'.$$

In the most general setting, each subdomain can be written as $\Omega_q = \Omega_q^{x} \times \Omega_q^t$, permitting decomposition over both space and time. The interfaces between subdomains, denoted $\Gamma_{q,q'} = \partial\Omega_q \cap \partial\Omega_{q'}$, may have complex, irregular shapes and may extend over temporal intervals (Shukla et al., 2021).

For each subdomain $\Omega_q$, XPINN introduces an independent neural network $u_{\Theta_q}(x,t): \Omega_q\to\mathbb{R}$ parameterized by $\Theta_q$. Each network may have a custom architecture (number of layers, widths, activation types, and locally adaptive nonlinearities), which enables adaptive allocation of modeling capacity to locally complex solution regions (Shukla et al., 2021, Elfetni et al., 2024).
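
As a deliberately simple illustration of such a decomposition, the sketch below splits a 1D space-time domain into two subdomains and samples collocation and interface points. The split location, point counts, and uniform sampling are illustrative assumptions, not prescriptions from the cited papers.

```python
import numpy as np

# Minimal sketch: split Omega = [0, 1] x [0, 1] (space x time) into two
# subdomains along x = 0.5 and sample residual and interface points.
rng = np.random.default_rng(0)

def sample_subdomain(x_lo, x_hi, n_residual):
    """Uniformly sample collocation (residual) points inside one subdomain."""
    x = rng.uniform(x_lo, x_hi, size=(n_residual, 1))
    t = rng.uniform(0.0, 1.0, size=(n_residual, 1))
    return np.hstack([x, t])

def sample_interface(x_interface, n_interface):
    """Sample points on the shared interface Gamma_{1,2} = {x = x_interface}."""
    t = rng.uniform(0.0, 1.0, size=(n_interface, 1))
    x = np.full_like(t, x_interface)
    return np.hstack([x, t])

subdomains = {
    1: sample_subdomain(0.0, 0.5, n_residual=2000),
    2: sample_subdomain(0.5, 1.0, n_residual=2000),
}
interface_points = sample_interface(0.5, n_interface=256)  # shared by both networks
```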

2. Loss Functions, Interface Coupling, and Optimization

The XPINN framework enforces both physical law and inter-network compatibility by minimizing a composite loss function. For subdomain qq, the loss comprises:

  • Data loss $MSE_{u,q}$: fits known data (boundary/initial/measurement),
  • Physics loss $MSE_{\mathcal{F},q}$: minimizes the squared residual of the governing PDE,
  • Interface losses: enforce weak continuity across $\Gamma_{q,q'}$ for both the solution ("average-solution" continuity) and, crucially, the PDE residual ("residual continuity").

Explicitly,

$$\begin{aligned}
MSE_{u,q} &= \frac{1}{N_{u,q}}\sum_{i=1}^{N_{u,q}} \left|u_{\Theta_q}\big(x_{u,q}^{(i)}\big) - u_{q,\text{exact}}^{(i)}\right|^2, \\
MSE_{\mathcal{F},q} &= \frac{1}{N_{F,q}}\sum_{i=1}^{N_{F,q}} \left|\mathcal{F}\big(u_{\Theta_q}\big)\big(x_{F,q}^{(i)}\big)\right|^2, \\
MSE_{\bar{u},q} &= \sum_{q'\in\text{neigh}(q)}\frac{1}{N_{I,q}}\sum_{i=1}^{N_{I,q}} \left|u_{\Theta_q}\big(x_{I,q}^{(i)}\big) - \frac{u_{\Theta_q} + u_{\Theta_{q'}}}{2}\big(x_{I,q}^{(i)}\big)\right|^2, \\
MSE_{\mathcal{R},q} &= \sum_{q'\in\text{neigh}(q)}\frac{1}{N_{I,q}}\sum_{i=1}^{N_{I,q}} \left|\mathcal{F}\big(u_{\Theta_q}\big)\big(x_{I,q}^{(i)}\big) - \mathcal{F}\big(u_{\Theta_{q'}}\big)\big(x_{I,q}^{(i)}\big)\right|^2.
\end{aligned}$$

Each subdomain’s total loss is

$$\mathcal{J}_q(\Theta_q) = W_{u,q}\,MSE_{u,q} + W_{\mathcal{F},q}\,MSE_{\mathcal{F},q} + W_{I,q}\,MSE_{\bar{u},q} + W_{I,\mathcal{F},q}\,MSE_{\mathcal{R},q},$$

with $W$ denoting tunable weights. The global XPINN loss aggregates over all subdomains: $\mathcal{J}_{\rm XPINN} = \sum_{q=1}^{N_{\rm sd}} \mathcal{J}_q(\Theta_q)$. Interface losses are "soft" constraints; there is no hard parameter sharing across networks (Shukla et al., 2021, Hu et al., 2021).
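
To make the structure of $\mathcal{J}_q$ concrete, the following minimal sketch assembles the four terms for one subdomain in PyTorch. The stand-in PDE (a viscous Burgers-type residual), network sizes, weight values, and helper names are illustrative assumptions rather than the setup of the cited papers; in a full XPINN, each $\mathcal{J}_q$ would be minimized with respect to $\Theta_q$ only.

```python
import torch

def mlp(width=32, depth=4):
    """Small fully connected network u_{Theta_q}(x, t) -> u."""
    layers, in_dim = [], 2
    for _ in range(depth):
        layers += [torch.nn.Linear(in_dim, width), torch.nn.Tanh()]
        in_dim = width
    layers.append(torch.nn.Linear(in_dim, 1))
    return torch.nn.Sequential(*layers)

def pde_residual(net, xt, nu=0.01):
    """Stand-in residual F(u) = u_t + u u_x - nu u_xx at points xt of shape [N, 2]."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

def subdomain_loss(net_q, neighbor_nets, data_xt, data_u, colloc_xt, iface_xt,
                   W_u=20.0, W_F=1.0, W_I=20.0, W_IF=1.0):
    """Composite loss J_q for one subdomain (weights are illustrative)."""
    mse = lambda r: torch.mean(r ** 2)
    loss = W_u * mse(net_q(data_xt) - data_u)                  # data / boundary / initial term
    loss = loss + W_F * mse(pde_residual(net_q, colloc_xt))    # PDE residual term
    for net_p in neighbor_nets:                                # interface coupling terms
        u_q, u_p = net_q(iface_xt), net_p(iface_xt)
        loss = loss + W_I * mse(u_q - 0.5 * (u_q + u_p))       # average-solution continuity
        loss = loss + W_IF * mse(pde_residual(net_q, iface_xt)
                                 - pde_residual(net_p, iface_xt))  # residual continuity
    return loss
```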

3. Parallel Training and Hyperparameter Management

XPINNs enable distributed training by assigning each subdomain network to a distinct computational resource (e.g., separate MPI rank, CPU, or GPU). Inter-network communication is limited to low-dimensional interface buffers (predictions and residuals on $\Gamma_{q,q'}$), eliminating the need for expensive all-reduce across full network weights. This scheme provides:

  • Intrinsic parallelism and scalability, demonstrated by nearly linear weak and strong scaling to dozens of GPUs/CPUs, with speedups of 7–20× on large-scale problems (Shukla et al., 2021).
  • Per-subdomain hyperparameter optimization (network depth, width, activation, sampling density, loss weights), which is conducted independently and in parallel across all subdomains, facilitating adaptation to local solution complexity and efficient grid/Bayesian search (Shukla et al., 2021, Elfetni et al., 2024).
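
For illustration only, such a per-subdomain hyperparameter assignment might look as follows; every value is a placeholder rather than a recommendation from the cited studies. The idea is that a subdomain covering a sharp local feature receives more capacity and denser sampling than one where the solution is smooth.

```python
# Illustrative per-subdomain hyperparameters; all values are placeholders.
subdomain_hparams = {
    1: dict(depth=4, width=20, activation="tanh", n_residual=2_000,  W_interface=20.0),
    2: dict(depth=6, width=64, activation="tanh", n_residual=10_000, W_interface=20.0),
}
```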

The parallel algorithm consists of domain decomposition, assignment and sampling of data/residual/interface points, interface communication (non-blocking MPI send/recv), loss construction, and local network updates via optimizers such as Adam or L-BFGS (Shukla et al., 2021).
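
A minimal sketch of that interface exchange, assuming one mpi4py rank per subdomain and a single shared interface between ranks 0 and 1, is shown below; buffer sizes, tags, and the elided training steps are illustrative.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
assert comm.Get_size() == 2     # two-subdomain example: ranks 0 <-> 1
neighbor = 1 - rank

n_iface = 256
u_local = np.zeros(n_iface)     # this rank's predictions on Gamma_{q,q'}
r_local = np.zeros(n_iface)     # this rank's PDE residuals on Gamma_{q,q'}
u_remote = np.empty(n_iface)
r_remote = np.empty(n_iface)

for step in range(1000):
    # ... evaluate u_local / r_local from the local network at interface points ...

    # Non-blocking exchange of low-dimensional interface buffers only; the full
    # network weights never leave the rank that owns them.
    reqs = [
        comm.Isend(u_local, dest=neighbor, tag=0),
        comm.Isend(r_local, dest=neighbor, tag=1),
        comm.Irecv(u_remote, source=neighbor, tag=0),
        comm.Irecv(r_remote, source=neighbor, tag=1),
    ]
    MPI.Request.Waitall(reqs)

    # ... build J_q with the received u_remote / r_remote and take a local
    #     Adam or L-BFGS step on this rank's parameters only ...
```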

4. Generalization, Theoretical Analysis, and Practical Trade-offs

XPINN’s principal theoretical advance is the decomposition of complex global solutions into simpler, locally smooth pieces, thereby reducing the required network complexity per subdomain. Theoretical work (Hu et al., 2021) quantifies a trade-off:

  • Positive effect: The global solution complexity $\|u^*\|_{\mathcal{W}^L(\Omega)}$ is replaced by the (typically smaller) weighted sum of per-subdomain complexities $\sum_i \left(n_{r,i}/n_r\right)\|u^*\|_{\mathcal{W}^L(\Omega_i)}$ in the generalization bound.
  • Negative effect: Each subnetwork is trained on a reduced data set, increasing the risk of overfitting if $n_{r,i}$ is small and thus raising the statistical error term (scaling as $\ln n_{r,i}/\sqrt{n_{r,i}}$); a schematic combination of these two effects is sketched after this list.
  • Empirical results: XPINN outperforms PINN when the global solution complexity is high but local complexities are low, and subdomain datasets are sufficiently large. If local complexity reduction is marginal and data per subdomain is limited, XPINN may underperform the monolithic PINN (Hu et al., 2021).
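
Schematically, and only as a way of juxtaposing the two effects just listed (the precise bound and constants in Hu et al., 2021 differ), the decomposition trades the first quantity below against the second:

```latex
\[
  \underbrace{\sum_{i} \frac{n_{r,i}}{n_r}\,\big\|u^*\big\|_{\mathcal{W}^L(\Omega_i)}}_{\text{approximation: reduced local complexity}}
  \quad\text{versus}\quad
  \underbrace{\sum_{i} \frac{\ln n_{r,i}}{\sqrt{n_{r,i}}}}_{\text{statistics: fewer samples per subdomain}}
\]
```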

The table below synthesizes conditions for XPINN effectiveness relative to PINN:

| Solution Structure | XPINN vs. PINN Performance | Example |
|---|---|---|
| Piecewise simple / local structure | XPINN better than PINN | 2D Euler, advection eq. |
| Homogeneously complex / global | PINN better than XPINN | Heat, Poisson eq. |
| Intermediate cases | Comparable, threshold effect | Parameterized trigonometric examples |

Each scenario is substantiated by testing on a range of PDEs, with theoretical and empirical alignment (Hu et al., 2021).

5. Applications and Extensions

XPINNs have been deployed across a broad set of challenging PDE settings:

  • Multi-phase-field (MPF) problems: The XPINN paradigm adapts naturally to three-dimensional (space × time × phase) decompositions, with each batch handled by an independent PINN. A Master neural network orchestrates data transfer and inter-network coupling, particularly in enforcing physical constraints across phase interfaces (Elfetni et al., 2024).
  • Hyperbolic conservation laws: The Buckley–Leverett equation (nonlinear, hyperbolic) with shocks is solved by dynamically tracking the evolving discontinuity and assigning subdomains to pre-shock and post-shock regions; Rankine–Hugoniot interface losses enforce flux continuity, as sketched after this list (Rehman et al., 5 Nov 2025).
  • Problems with limited or noisy data: Bayesian XPINNs (B-XPINNs) employ variational inference over network weights, facilitating uncertainty quantification and improved calibration, especially in settings with strong physical constraints or multi-valued solutions (Landgren et al., 28 Sep 2025).
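
For the Buckley–Leverett setting above, one plausible form of such an interface penalty (the exact formulation in Rehman et al., 5 Nov 2025 may differ) enforces the standard Rankine–Hugoniot condition $s\,(u^- - u^+) = f(u^-) - f(u^+)$ across the tracked shock, with $u^-$ and $u^+$ predicted by the pre- and post-shock subnetworks, $f$ the flux (fractional-flow) function, and $s$ the shock speed:

```latex
\[
  MSE_{\mathrm{RH}} \;=\; \frac{1}{N_\Gamma}\sum_{i=1}^{N_\Gamma}
  \Big|\, f\big(u^{-}(x_\Gamma^{(i)})\big) - f\big(u^{+}(x_\Gamma^{(i)})\big)
        \;-\; s\,\big(u^{-}(x_\Gamma^{(i)}) - u^{+}(x_\Gamma^{(i)})\big)\,\Big|^2 .
\]
```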

In all cases, XPINN offers greater flexibility in capturing multi-scale and multi-physics effects, handling irregular and time-varying decompositions, and reducing pathologies common in monolithic PINNs (such as gradient localization and parameter inefficiency) (Shukla et al., 2021, Elfetni et al., 2024, Rehman et al., 5 Nov 2025).

6. Limitations and Alternative Approaches

XPINNs are not universally optimal. Their limitations include:

  • Requirement for careful tuning of interface penalty weights $W_{I,q}, W_{I,\mathcal{F},q}$ for training stability and interface accuracy.
  • Potential for load imbalance and inefficient learning if subdomain decomposition leads to uneven sample distributions.
  • Absence of global convergence proofs—a situation mitigated in practice by robust empirical performance but remaining an open mathematical question (Shukla et al., 2021).
  • XPINNs may underperform on globally smooth or homogeneously complex problems if domain decomposition does not yield substantially simpler local problems or if data in each subdomain is limited (Hu et al., 2021).

Alternative approaches, notably Augmented PINNs (APINNs), introduce soft, learnable domain decompositions via gating networks, allowing for flexible parameter sharing and global access to training samples while obviating explicit interface losses. APINN has been shown to uniformly match or surpass XPINN in a variety of benchmarks, especially when a good soft decomposition is learnable (Hu et al., 2022).

Comparison Table: PINN, XPINN, and APINN

| Architecture | Decomposition Type | Interface Coupling | Parameter Sharing | Data Availability |
|---|---|---|---|---|
| PINN | None | N/A | Full sharing (single net) | All data to one net |
| XPINN | Hard (fixed) | Soft interface loss (solution & PDE residual) | None between subdomains | Subdomain-local only |
| APINN | Soft (learnable) | No explicit interface; uses gating | Shared trunk, local heads | Full domain for all |

In this sense, APINN generalizes XPINN by relaxing hard interface constraints into learnable gating and enabling flexible discovery of effective decompositions (Hu et al., 2022).
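
The sketch below illustrates this gated, softly decomposed ansatz in PyTorch: a shared trunk, per-region heads, and a softmax gating network that learns the decomposition. Module sizes and the weighted-sum combination rule are illustrative assumptions; the actual APINN architecture in Hu et al. (2022) may differ in detail.

```python
import torch

class GatedPINN(torch.nn.Module):
    """Soft decomposition: u(x,t) = sum_k g_k(x,t) * u_k(x,t), with learnable gates g_k."""
    def __init__(self, n_heads=2, width=32):
        super().__init__()
        self.trunk = torch.nn.Sequential(                 # shared representation
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh())
        self.heads = torch.nn.ModuleList(                 # per-"subdomain" experts
            [torch.nn.Linear(width, 1) for _ in range(n_heads)])
        self.gate = torch.nn.Sequential(                  # learnable soft decomposition
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, n_heads))

    def forward(self, xt):                                # xt: [N, 2] = (x, t)
        g = torch.softmax(self.gate(xt), dim=-1)          # gating weights sum to 1
        z = self.trunk(xt)
        u_heads = torch.cat([h(z) for h in self.heads], dim=-1)
        return (g * u_heads).sum(dim=-1, keepdim=True)
```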

7. Outlook and Future Directions

XPINNs’ multidomain decomposition strategies continue to inspire further extensions:

  • Adaptive decomposition (automatic partitioning based on local solution features),
  • Pyramidal or hierarchical training schedules for large-scale and time-evolving domains (Elfetni et al., 2024),
  • Coupling with Bayesian inference for robust uncertainty quantification in the presence of scarce or noisy data (Landgren et al., 28 Sep 2025),
  • Extensions to higher-dimensional, moving interface, and multi-branch solutions as in multi-phase or multi-valued PDEs,
  • Integration of dynamic interface tracking for problems with evolving discontinuities (Rehman et al., 5 Nov 2025).

XPINNs constitute a foundational methodology for large-scale, parallelizable, and adaptive physics-informed deep learning of complex PDE systems. Their ongoing development and integration with soft gating, Bayesian, and master-coupling architectures indicate a sustained trajectory of growth and innovation within the scientific machine learning community.
