Domain Decomposition PINNs: CPINN & XPINN
- Domain decomposition PINNs are methods that divide the problem domain into smaller subdomains, each managed by a local neural network with enforced interface constraints.
- CPINN employs penalty terms for state and flux continuity, while XPINN extends this by also penalizing residual discontinuities across space–time interfaces.
- These techniques enhance scalability and accuracy for solving complex PDEs, addressing challenges like poor convergence and limited data utilization in standard PINNs.
Domain decomposition Physics-Informed Neural Networks (PINNs) refer to a class of methodologies for solving partial differential equations (PDEs) and related inverse problems in which the computational domain is decomposed into smaller subdomains, each handled by a separate neural network, with interface conditions enforcing physical or mathematical consistency across subdomain boundaries. Conservative PINNs (CPINN) and extended PINNs (XPINN) are the two principal archetypes of this approach, each offering distinct interface formulations, parallelization strategies, and suitability for different classes of PDEs. These frameworks are motivated by challenges inherent to vanilla (single-network) PINNs such as poor scalability, slow convergence for multiscale or stiff problems, and degraded accuracy for solutions with sharp gradients, discontinuities, or spatially heterogeneous features.
1. Fundamental Principles and Mathematical Formulation
In CPINN and XPINN, the global domain $\Omega$ (spatial, or space–time) is partitioned into non-overlapping (CPINN, XPINN) or overlapping (FBPINN, PINN Balls) subdomains $\{\Omega_k\}_{k=1}^{K}$. Each subdomain is assigned an independent neural network $u_k(x;\theta_k)$ with parameters $\theta_k$, and the global solution is represented as
$$u(x) \;=\; \sum_{k=1}^{K} \chi_{\Omega_k}(x)\, u_k(x;\theta_k),$$
where $\chi_{\Omega_k}$ is the characteristic function of $\Omega_k$ for non-overlapping subdomains (CPINN/XPINN), or is replaced by smooth partition-of-unity window functions $\omega_k$ with $\sum_k \omega_k \equiv 1$ for overlapping deep-DDM or FBPINN approaches (Shukla et al., 2021, Dolean et al., 2022, Saha et al., 2024).
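The partition-of-unity composition above can be sketched in a few lines of Python. The "subnetworks" here are closed-form placeholder callables standing in for trained local networks, and `raw_window`/`global_solution` are illustrative names, not from any of the cited codebases:

```python
import math

# Toy "subnetworks": closed-form callables standing in for trained
# local networks (illustrative placeholders, not real models).
def u1(x):
    return math.sin(math.pi * x)

def u2(x):
    return math.sin(math.pi * x)

def raw_window(x, center, width):
    # Smooth Gaussian bump; FBPINN-style compactly supported windows
    # would also work here.
    return math.exp(-((x - center) / width) ** 2)

def global_solution(x, models, centers, width=0.3):
    # Partition of unity: normalize the window weights so they sum
    # to one, then blend the local models.
    w = [raw_window(x, c, width) for c in centers]
    total = sum(w)
    return sum(wi / total * m(x) for wi, m in zip(w, models))

u = lambda x: global_solution(x, [u1, u2], centers=[0.25, 0.75])
```

Because the normalized weights sum to one, the blend reproduces any function that all local models agree on; in practice each $u_k$ only needs to be accurate where its window is non-negligible.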
The total loss function in CPINN/XPINN is a weighted sum of four key terms:
- PDE residual loss: Each network minimizes the squared residual of the governing PDE within its subdomain.
- Boundary loss: Standard Dirichlet or Neumann losses enforce boundary values.
- Interface continuity loss: Quadratic penalties enforce continuity of state (solution) and flux (for conservation laws) across subdomain interfaces.
- (XPINN only) Residual continuity loss: Penalty on jumps in the PDE residual across interfaces, which is crucial for general PDEs and space–time decompositions.
For XPINN, this structure is extended to support decomposition in both space and time, facilitating space–time domain splits critical for unsteady, multiscale, or dynamically evolving problems (Shukla et al., 2021, Hu et al., 2021, Rehman et al., 5 Nov 2025).
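A minimal sketch of the resulting composite loss, assuming a 1D Poisson problem $u'' = f$ split into two non-overlapping subdomains with one interface point. Finite differences stand in for automatic differentiation, and all function names (`d1`, `d2`, `xpinn_loss`) are illustrative:

```python
# Sketch of the XPINN composite loss for 1D Poisson u'' = f on [0, 1],
# two subdomains [0, 0.5] and [0.5, 1], interface at x = 0.5.
# The "networks" u1, u2 are plain callables; derivatives use central
# finite differences in place of automatic differentiation.

def d1(u, x, h=1e-4):
    # First derivative (central difference).
    return (u(x + h) - u(x - h)) / (2 * h)

def d2(u, x, h=1e-4):
    # Second derivative (central difference).
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

def xpinn_loss(u1, u2, f, xs1, xs2, x_if=0.5, bc=(0.0, 0.0),
               w_pde=1.0, w_bc=1.0, w_if=1.0):
    # (1) PDE residual loss inside each subdomain.
    r1 = [d2(u1, x) - f(x) for x in xs1]
    r2 = [d2(u2, x) - f(x) for x in xs2]
    pde = sum(r * r for r in r1) / len(r1) + sum(r * r for r in r2) / len(r2)
    # (2) Dirichlet boundary loss at the outer ends.
    bcl = (u1(0.0) - bc[0]) ** 2 + (u2(1.0) - bc[1]) ** 2
    # (3) Interface continuity: state and flux (CPINN terms).
    state = (u1(x_if) - u2(x_if)) ** 2
    flx = (d1(u1, x_if) - d1(u2, x_if)) ** 2
    # (4) Residual continuity across the interface (XPINN-only term).
    resid = ((d2(u1, x_if) - f(x_if)) - (d2(u2, x_if) - f(x_if))) ** 2
    return w_pde * pde + w_bc * bcl + w_if * (state + flx + resid)
```

Feeding the exact solution into both subnetworks drives every term to (numerically) zero, which is a useful sanity check when wiring up interface losses.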
2. Interface Coupling Mechanisms and Variants
The interface terms distinguish CPINN and XPINN:
- CPINN imposes solution and normal-flux continuity at spatial interfaces $\Gamma_{kk'}$ via soft (penalty) constraints, e.g. $\mathcal{L}_{\mathrm{int}} = \frac{1}{N_I}\sum_{x\in\Gamma_{kk'}} \big[(u_k - u_{k'})^2 + (f_k\cdot n - f_{k'}\cdot n)^2\big]$, where $f$ denotes the flux and $n$ the interface normal.
- XPINN adds residual continuity and supports interfaces in both space and time: $\mathcal{L}_{\mathrm{int}}^{\mathrm{XPINN}} = \frac{1}{N_I}\sum_{x\in\Gamma_{kk'}} \big[(u_k - \{\!\{u\}\!\})^2 + (\mathcal{R}_k - \mathcal{R}_{k'})^2\big]$, where $\{\!\{u\}\!\}$ is the interface average of the adjacent subdomain solutions and $\mathcal{R}_k$ is the PDE residual of network $k$.
- Both architectures allow additional penalty terms (e.g., for normal-derivative or higher-order continuity in complex PDEs), but no Lagrange multipliers or hard enforcement; all subdomain coupling is realized via soft constraints (Shukla et al., 2021, Hu et al., 2021, Dolean et al., 2022, Figueres et al., 26 Apr 2025).
For highly stiff or advective problems, interface conditions can instead be enforced strongly, either as hard Dirichlet data or via ansatz-based embeddings in each local network, although CPINN/XPINN in their original definitions use penalty approaches (Snyder et al., 2023).
3. Parallelization, Scalability, and Training Algorithms
A primary motivation for CPINN and XPINN is parallelization and scalability. Each subdomain network is trained independently, subject to synchronized updates or local interface data exchange. Implementations assign each subnetwork to an MPI rank (distributed CPU/GPU nodes), with local batches of points for interior, boundary, and interface losses communicated only to adjacent ranks, minimizing communication overhead (Shukla et al., 2021).
Key parallel regime features:
| Method | Partition | Interface Enforcement | Communication | Time Decomposition |
|---|---|---|---|---|
| CPINN | Spatial | State + Flux (penalty) | Small buffers (pts) | No |
| XPINN | Space–time | State + Flux + Residual (pen.) | Small buffers | Yes (slab/block) |
| Data-Parallel PINN | None | N/A | Full all-reduce | Not supported |
XPINN provides maximal hyperparameter and architecture flexibility—each subnetwork can be assigned bespoke learning rates, depths, widths, and even activations or optimizer schedules to match local complexity, critical for multiscale/multiphysics or heterogeneous PDEs (Shukla et al., 2021, Saha et al., 2024).
The additive Schwarz paradigm (all subdomains update in parallel) dominates for large-scale, parallel scalability, whereas multiplicative Schwarz (sequential or colored updates) offers convergence advantages at small domain counts but is less scalable (Dolean et al., 2022, Snyder et al., 2023).
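The two update schedules can be contrasted on the textbook 1D Laplace problem $u''=0$, $u(0)=0$, $u(1)=1$, with two overlapping subdomains $[0,b]$ and $[a,1]$. Each local "PINN training" is replaced by an exact linear solve, so only the Schwarz coupling remains; `schwarz` below is a hypothetical helper, not from the cited papers:

```python
# Classical Schwarz iteration on u'' = 0, u(0) = 0, u(1) = 1, with
# overlapping subdomains [0, b] and [a, 1] (a < b). Local solves are
# exact linear functions, standing in for converged subdomain PINNs.
# Unknowns: ua = current estimate of u(a), ub = estimate of u(b).
# Local solve on [0, b]:  u1(x) = ub * x / b          -> u1(a) = ub*a/b
# Local solve on [a, 1]:  u2(x) = ua + (1-ua)(x-a)/(1-a)

def schwarz(a=0.4, b=0.6, iters=50, additive=True):
    ua, ub = 0.0, 1.0  # initial guesses at the artificial boundaries
    for _ in range(iters):
        if additive:
            # Parallel (additive): both updates read the OLD iterate.
            ua, ub = ub * a / b, ua + (1 - ua) * (b - a) / (1 - a)
        else:
            # Sequential (multiplicative): second solve reads fresh data.
            ua = ub * a / b
            ub = ua + (1 - ua) * (b - a) / (1 - a)
    return ua, ub
```

Both schedules converge to the exact values $u(a)=a$, $u(b)=b$; the multiplicative sweep contracts roughly twice as fast per iteration but serializes the subdomain solves, mirroring the scalability trade-off described above.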
4. Generalization, Accuracy, and Trade-offs
Theoretical results for XPINN demonstrate a trade-off between reduced local function complexity and lower sample count per subdomain. For a function with high complexity globally, decomposition into subdomains can substantially reduce the function complexity (Barron norm) needed on each subnetwork, which tightens prior and posterior generalization bounds. However, fewer training points per subdomain inflate variance, potentially worsening generalization if the reduction in complexity does not compensate (Hu et al., 2021).
Schematically, the generalization gap of subnetwork $k$ scales as $\|u\|_{\mathcal{B}(\Omega_k)} / \sqrt{n_k}$, where $\|u\|_{\mathcal{B}(\Omega_k)}$ is the Barron norm of the target function restricted to $\Omega_k$ and $n_k$ is the number of residual (collocation) points in that subdomain; decomposition pays off when the drop in Barron norm outpaces the loss in $\sqrt{n_k}$ (Hu et al., 2021).
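The trade-off can be illustrated with invented numbers, using the schematic bound (Barron norm) / sqrt(sample count); all norms and point counts below are made up for illustration:

```python
import math

def bound(barron_norm, n_points):
    # Schematic generalization bound: complexity over root sample count.
    return barron_norm / math.sqrt(n_points)

# Global problem: one complex network sees all the samples.
global_bound = bound(barron_norm=100.0, n_points=10_000)

# Decomposition helps: each half is far simpler (norm 20 vs 100),
# outweighing the halved sample count per subdomain.
good_split = max(bound(20.0, 5_000), bound(20.0, 5_000))

# Decomposition hurts: the solution is uniformly complex, so each
# subnetwork keeps nearly the full norm but sees half the data.
bad_split = max(bound(90.0, 5_000), bound(90.0, 5_000))
```

With these numbers, `good_split` beats `global_bound` while `bad_split` is worse, matching the empirical pattern reported for localized versus uniformly complex solutions.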
Empirical tests: XPINN outperforms PINN when the solution is highly nonuniform or contains local singularities/discontinuities that are spatially localized, since subdomain complexity is substantially reduced. Conversely, when the solution is regular or decomposition yields little complexity reduction, PINN is preferable due to better data utilization (Hu et al., 2021, Hu et al., 2022).
5. Extensions, Variants, and Related Frameworks
Several recent frameworks extend or generalize CPINN/XPINN:
- FBPINN: Employs overlapping subdomains combined via smooth partition-of-unity ("window") functions, with explicit Schwarz-style additive/multiplicative update schedules. The overlap and smooth windows guarantee continuity of the global solution by construction, and coarse-space correction augments scalability for higher-frequency solutions (Dolean et al., 2022, Saha et al., 2024).
- $PINN (Bayesian CPINN): Incorporates Bayesian neural networks in each subdomain, with interface continuity enforced via penalties as in CPINN. This enables scalable uncertainty quantification, both subdomain-local and global, robust to heterogeneous noise (Figueres et al., 26 Apr 2025).
- PINN Balls: Utilizes a mixture-of-experts approach with trainable, adaptive compactly supported radial basis windowing (“balls”), soft gating, and adversarial adaptive sampling for collocation point allocation. This allows global second-order optimization with scalable memory usage (Bonfanti et al., 24 Oct 2025).
- APINN: Leverages a gating network as a soft, trainable domain partition, fully integrating data across the domain with optional parameter-sharing and entropy regularization. This method omits explicit interface penalties, relying on the gating network for smooth coupling (Hu et al., 2022).
- Discrete PINNs with Enforced Interface Constraints (EIC-dPINN): Adopts strong (hard) constraint enforcement on nonmatching meshes via finite element–style interpolation across interfaces, achieving exact continuity at the cost of preprocessing (Yin et al., 16 May 2025).
- Two-level Deep-DDM: Combines subdomain and global coarse PINNs, injecting global low-frequency solution content into each subdomain to restore scalability as the number of subdomains increases (Dolean et al., 2024).
6. Practical Applications and Empirical Performance
CPINN and XPINN have been applied across a range of scientific computing problems, including advection–diffusion, inverse elliptic and parabolic PDEs, hyperbolic conservation laws, and coupled multiphysics systems:
- Domain decomposition approaches enable direct resolution of solution discontinuities (e.g., across material interfaces or shocks) via localized subnetworks and physically consistent interface enforcement (e.g., Rankine–Hugoniot jump for hyperbolic PDEs) (Rehman et al., 5 Nov 2025, Bandai et al., 2022).
- Hybrid methods incorporating classical solvers (e.g., finite differences as the full-order model in a Schwarz-PINN coupling) operate effectively in regimes (high Péclet number) where pure PINN couplings fail (Snyder et al., 2023).
- The parallel MPI+X framework achieves near-linear weak scaling for large domain decompositions, with throughput (collocation points processed per second) growing proportionally with the number of GPUs or cores. Strong scaling exhibits 65–90% efficiency up to 24 subdomains, with XPINN slightly less efficient due to higher interface communication (Shukla et al., 2021).
- Bayesian CPINN ($PINN) can quantify global solution and parameter uncertainty down to 10% error for up to 15% data noise, with robust performance in heterogeneous-noise regimes and domain-size imbalance (Figueres et al., 26 Apr 2025).
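The Rankine–Hugoniot interface enforcement mentioned above for hyperbolic conservation laws reduces to a simple algebraic constraint. A sketch for the inviscid Burgers equation with $f(u) = u^2/2$, where the function names are illustrative:

```python
def flux(u):
    # Inviscid Burgers flux f(u) = u^2 / 2 (assumed for this sketch).
    return 0.5 * u * u

def rankine_hugoniot_residual(u_left, u_right, s):
    # Zero iff the jump (u_left, u_right) moving at shock speed s is
    # consistent with conservation: s * [u] = [f(u)].
    return s * (u_left - u_right) - (flux(u_left) - flux(u_right))
```

In a decomposed PINN, the squared value of this residual at interface points can be added as a penalty term, so the subnetworks on either side of a shock agree on a physically admissible jump rather than merely a continuous state.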
The practical choice between CPINN, XPINN, and their extensions depends on the PDE structure (linear/nonlinear, conservation law, regularity, expected discontinuities), desired time and space parallelization, tolerance for hyperparameter tuning (interface weights, overlap), and hardware concurrency regime.
7. Limitations, Challenges, and Future Directions
Several methodological and practical challenges persist:
- Selecting and tuning interface penalty weights in CPINN/XPINN is nontrivial; over- or under-penalization can destabilize training or delay convergence.
- For multiscale or high-frequency problems, slow propagation of low-frequency components across many interfaces degrades scalability; coarse-space correction or explicit global network coupling is necessary (Dolean et al., 2022, Dolean et al., 2024).
- Hard constraint or ansatz-based interface coupling (as in EIC-dPINN) eliminates penalty-weight tuning, but demands mesh/geometry preprocessing and may hinder gradient-based optimization on complex geometries (Yin et al., 16 May 2025).
- Adaptive domain decomposition (PINN Balls, APINN) can discover optimal partitions in data-driven fashion, but convergence and interpretability may depend on initialization or require careful entropy regularization (Hu et al., 2022, Bonfanti et al., 24 Oct 2025).
- Empirical studies confirm that XPINN/CPINN can outperform, match, or underperform PINN depending on the balance between data coverage and subdomain function complexity—a precise domain/protocol-aware choice is required (Hu et al., 2021).
Ongoing research seeks to hybridize mesh-based and mesh-free approaches, combine PINNs with full-order numerical solvers, and enhance uncertainty quantification for scalable multiphysics and stochastic PDEs in a domain-decomposed setting.
References:
- (Shukla et al., 2021) Parallel Physics-Informed Neural Networks via Domain Decomposition.
- (Hu et al., 2021) When Do Extended Physics-Informed Neural Networks (XPINNs) Improve Generalization?
- (Dolean et al., 2022) Finite basis physics-informed neural networks as a Schwarz domain decomposition method.
- (Hu et al., 2022) Augmented Physics-Informed Neural Networks (APINNs): A gating network-based soft domain decomposition methodology.
- (Snyder et al., 2023) Domain decomposition-based coupling of physics-informed neural networks via the Schwarz alternating method.
- (Dolean et al., 2024) Two-level deep domain decomposition method.
- (Saha et al., 2024) Towards Model Discovery Using Domain Decomposition and PINNs.
- (Figueres et al., 26 Apr 2025) $PINN - a Domain Decomposition Method for Bayesian Physics-Informed Neural Networks.
- (Yin et al., 16 May 2025) Enforced Interface Constraints for Domain Decomposition Method of Discrete Physics-Informed Neural Networks.
- (Bonfanti et al., 24 Oct 2025) PINN Balls: Scaling Second-Order Methods for PINNs with Domain Decomposition and Adaptive Sampling.
- (Rehman et al., 5 Nov 2025) Extended Physics Informed Neural Network for Hyperbolic Two-Phase Flow in Porous Media.
- (Bandai et al., 2022) Forward and inverse modeling of water flow in unsaturated soils with discontinuous hydraulic conductivities using physics-informed neural networks with domain decomposition.