Domain Decomposition PINNs
- Domain Decomposition PINNs are neural network-based solvers that partition PDE domains into overlapping or disjoint subregions to enhance local learning and approximation accuracy.
- They utilize techniques such as Schwarz preconditioning, layerwise splits, and interface continuity constraints to accelerate convergence and reduce errors compared to monolithic approaches.
- Adaptive decomposition, parallel processing, and hybrid optimization strategies offer robust scalability and significant speed-up, making them ideal for stiff, multiscale, and complex PDE problems.
Domain Decomposition Physics-Informed Neural Networks (PINNs) are a class of neural-network-based solvers for partial differential equations (PDEs) and related forward/inverse problems that employ explicit spatial or parameter-space decomposition to enhance both optimization efficiency and approximation accuracy. Unlike monolithic PINN architectures that struggle with multiscale phenomena, spectral stiffness, or computational scaling for large domains, domain decomposition techniques permit localized learning, interface continuity enforcement, parallelism, and tailored treatment of complex solution features.
1. Mathematical Principles and Formulations
Domain decomposition PINNs split the set of network parameters (or the physical domain, or the joint space-time domain) into disjoint or overlapping subdomains. Each subdomain is assigned a distinct neural network, or a subset of parameters, which is trained either independently or with well-defined inter-domain constraints.
A prototypical example is the layerwise Schwarz preconditioning approach (Kopaničáková et al., 2023), where the global PINN parameter vector $\theta$ is partitioned into groups $\theta_1, \dots, \theta_{N}$, with restriction operators $R_i$ and extension operators $E_i$ picking out and embedding the individual blocks. The training objective for domain-decomposed optimization is formulated as $\min_\theta \mathcal{L}(\theta)$, where $\mathcal{L}(\theta)$ is the mean-squared PDE residual. Rather than applying L-BFGS directly to $\mathcal{L}$, one introduces a right preconditioner based on the Schwarz decomposition, so that each quasi-Newton step acts on parameters that have already been improved by local block solves.
Additive preconditioning solves local optimizations for each , updates blocks in parallel, and aggregates them into the full parameter vector, while multiplicative preconditioning sweeps sequentially through subdomains.
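As a schematic illustration of this block structure (not the reference implementation), the following NumPy sketch encodes the restriction/extension operators as index sets over a toy parameter vector and contrasts an additive update, where all blocks are corrected from the same iterate and then summed, with a multiplicative sweep, where blocks are corrected sequentially; the quadratic objective and the plain gradient corrections are placeholder assumptions.

```python
import numpy as np

# Toy objective standing in for the mean-squared PDE residual L(theta).
A = np.diag([1.0, 2.0, 5.0, 10.0])
b = np.array([1.0, -1.0, 0.5, 2.0])
loss = lambda th: 0.5 * th @ A @ th - b @ th
grad = lambda th: A @ th - b

# Restriction operators R_i as index sets; extension E_i scatters a block back.
blocks = [np.array([0, 1]), np.array([2, 3])]    # two parameter groups

def local_correction(theta, idx, lr=0.1, steps=20):
    """Approximately minimize the loss over one block, holding the others fixed."""
    th = theta.copy()
    for _ in range(steps):
        th[idx] -= lr * grad(th)[idx]
    return th[idx] - theta[idx]                  # block increment

theta = np.zeros(4)

# Additive Schwarz: all blocks corrected from the same iterate, then summed.
theta_add = theta.copy()
for idx in blocks:                               # embarrassingly parallel in practice
    theta_add[idx] += local_correction(theta, idx)

# Multiplicative Schwarz: blocks corrected sequentially, each seeing the latest iterate.
theta_mul = theta.copy()
for idx in blocks:
    theta_mul[idx] += local_correction(theta_mul, idx)

print(loss(theta), loss(theta_add), loss(theta_mul))
```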
Other notable formulations include FBPINNs (Moseley et al., 2021, Dolean et al., 2022), which express the solution as a sum over smooth window functions and local networks, $u(x) \approx \sum_j \omega_j(x)\, u_j(x;\theta_j)$, where the windows $\omega_j$ have compact support, the $u_j$ are local neural approximations, and neighboring windows overlap to ensure a continuous global solution.
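A minimal one-dimensional sketch of this construction is given below, with hand-rolled cosine windows normalized to sum to one and arbitrary smooth functions standing in for the local networks $u_j$; the window shape and subdomain layout are illustrative assumptions, not the original FBPINN code.

```python
import numpy as np

# Overlapping subdomains on [0, 1]: (left, right) endpoints with nonzero overlap.
subdomains = [(0.0, 0.45), (0.30, 0.70), (0.55, 1.0)]

def raw_window(x, a, b):
    """Smooth bump supported on [a, b] (cosine taper), zero outside."""
    t = np.clip((x - a) / (b - a), 0.0, 1.0)
    w = 0.5 * (1.0 - np.cos(2.0 * np.pi * t))
    return np.where((x > a) & (x < b), w, 0.0)

def windows(x):
    """Normalize the bumps so they form a partition of unity on the interior."""
    raw = np.stack([raw_window(x, a, b) for a, b in subdomains])
    return raw / np.maximum(raw.sum(axis=0), 1e-12)

# Placeholder "local networks": any smooth local approximations would do here.
local_nets = [np.sin, np.cos, lambda x: x**2]

x = np.linspace(0.01, 0.99, 200)
w = windows(x)                                   # shape (n_subdomains, n_points)
u = sum(w[j] * local_nets[j](x) for j in range(len(subdomains)))
print(u.shape)                                   # global FBPINN-style ansatz values
```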
2. Layerwise, Spatial, and Space-Time Decomposition Strategies
Modern domain decomposition PINNs employ far more than simple spatial partitioning. The principle has been extended to:
- Layerwise parameter splits: Each neural network layer is viewed as a subdomain in parameter space; local solves are defined by holding all other layers fixed.
- Classical spatial subdomains: Conservative PINNs (cPINNs) and interface PINNs assign solution branches and physics to spatial regions, enforcing solution and flux continuity at interfaces (Shukla et al., 2021, Roy et al., 2024).
- Space-time decomposition: XPINNs and similar architectures divide the augmented domain into tensor-product blocks or arbitrary-shaped partitions, each covered by a dedicated network (Shukla et al., 2021). Continuity is enforced at the interfaces of Cartesian or non-Cartesian time-space slabs.
- Adaptive domain construction: Recent adaptive-basis PINNs (AB-PINNs) introduce new subdomains on-the-fly in regions of high residual loss—dynamically modifying the decomposition in response to solution features (Botvinick-Greenhouse et al., 10 Oct 2025).
Preconditioning and parallelization efficiency is strongly influenced by the choice of decomposition; e.g., maximal splitting (one layer per subdomain, one network per spatial block) generally yields the best accuracy and convergence speed (Kopaničáková et al., 2023).
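To make the layerwise split concrete, the following PyTorch sketch groups a small MLP's parameters one linear layer per block, corresponding to the maximal layerwise decomposition mentioned above; the network sizes and the freezing logic for a local solve are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Small fully connected PINN body; sizes are arbitrary for illustration.
model = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

# Layerwise decomposition: each Linear layer (weight + bias) is one parameter block,
# playing the role of one "subdomain" in parameter space.
blocks = [list(m.parameters()) for m in model if isinstance(m, nn.Linear)]

for i, block in enumerate(blocks):
    n = sum(p.numel() for p in block)
    print(f"block {i}: {n} parameters")

# A local solve for block i freezes all other blocks and optimizes only this one, e.g.:
i = 0
for j, block in enumerate(blocks):
    for p in block:
        p.requires_grad_(j == i)
local_opt = torch.optim.LBFGS(blocks[i], max_iter=5)  # local quasi-Newton solver
```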
3. Training Algorithms and Interface Constraints
Layerwise Schwarz PINNs combine two-stage iteration with L-BFGS acceleration (Kopaničáková et al., 2023):
- Step 1: Local nonlinear preconditioning. In additive schemes (ASPQN), the local problems for all blocks $\theta_i$ are solved independently and in parallel, and the resulting block updates are aggregated into the global parameter vector via the extension operators $E_i$.
- Step 2: Global L-BFGS step. The updated parameter vector is used to build a secant-based Hessian approximation and take a quasi-Newton step over the full parameter vector (a control-flow sketch follows below).
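The sketch below mimics this two-stage control flow on a toy least-squares objective, using SciPy's L-BFGS-B both for the local block solves and for the global step; it illustrates only the structure of the iteration and is not the ASPQN implementation of Kopaničáková et al.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
y = rng.standard_normal(20)
loss = lambda th: 0.5 * np.sum((A @ th - y) ** 2)
grad = lambda th: A.T @ (A @ th - y)

blocks = [np.arange(0, 3), np.arange(3, 6)]        # two parameter groups
theta = np.zeros(6)

for outer in range(5):
    # Step 1: local nonlinear preconditioning solves, one per block (parallel in ASPQN).
    increments = []
    for idx in blocks:
        def local_loss(z, idx=idx):
            th = theta.copy()
            th[idx] = z
            return loss(th)
        res = minimize(local_loss, theta[idx], method="L-BFGS-B",
                       options={"maxiter": 10})
        increments.append((idx, res.x - theta[idx]))
    for idx, dz in increments:                      # aggregate via extension operators
        theta[idx] += dz

    # Step 2: a few global quasi-Newton iterations on the full parameter vector.
    res = minimize(loss, theta, jac=grad, method="L-BFGS-B",
                   options={"maxiter": 3})
    theta = res.x

print("final loss:", loss(theta))
```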
FBPINNs and XPINNs, as well as interface and adaptive PINNs (Shukla et al., 2021, Roy et al., 2024, Botvinick-Greenhouse et al., 10 Oct 2025), define interface constraints via:
- Solution-continuity penalties (matching the subdomain solutions across shared interfaces) and flux-matching penalties (matching the normal flux across interfaces), evaluated over collocation points on the interfaces; a minimal sketch of these two terms follows this list.
- Residual continuity for non-conservative problems: loss terms penalize jumps in the PDE residual across interfaces.
- Partition-of-unity blending: smooth basis functions ensure continuity without additional explicit constraints.
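A minimal PyTorch sketch of the first two penalty types, for two subdomain networks meeting at a one-dimensional interface at x = 0.5, is given below; the network sizes, the unit penalty weights, and the 1D setting are illustrative assumptions.

```python
import torch
import torch.nn as nn

net1 = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # subdomain x < 0.5
net2 = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # subdomain x > 0.5

# Collocation points on the interface (a single point in 1D, repeated for generality).
x_if = torch.full((8, 1), 0.5, requires_grad=True)

u1, u2 = net1(x_if), net2(x_if)
du1 = torch.autograd.grad(u1.sum(), x_if, create_graph=True)[0]
du2 = torch.autograd.grad(u2.sum(), x_if, create_graph=True)[0]

# Solution-continuity and flux-matching penalties across the interface.
continuity = ((u1 - u2) ** 2).mean()
flux       = ((du1 - du2) ** 2).mean()
interface_loss = continuity + flux      # relative weighting of the terms is a tuning choice

interface_loss.backward()               # gradients flow into both subdomain networks
print(float(continuity), float(flux))
```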
Schwarz-style iteration (alternating or hybrid coloring) can be combined with interface data exchanges for robust convergence in overlapping subdomain settings (Snyder et al., 2023). PINN-FOM hybrids leverage high-fidelity classical solvers in some subdomains, interfacing with PINN branches for challenging solution regions.
4. Parallelization and Computational Scalability
Domain decomposition PINNs readily exploit parallelism. The additive Schwarz approach supports embarrassingly parallel local solves (each GPU processes one layer or subdomain), with only two collective communications per iteration—one all-gather of local updates, and one broadcast of global parameters (Kopaničáková et al., 2023). In practical benchmarks, ASPQN achieved 20–40× speed-up over single-GPU L-BFGS, with near-linear scaling up to at least 8–16 GPUs.
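This communication pattern can be sketched with mpi4py as follows; the block layout and the placeholder local update are assumptions, and the point is only that ranks exchange block updates and global parameters rather than training data or full optimizer state.

```python
# Run with, e.g.: mpirun -n 2 python aspqn_comm_sketch.py   (assumed file name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_params = 8
theta = np.zeros(n_params)                       # replicated global parameter vector
blocks = np.array_split(np.arange(n_params), size)
my_idx = blocks[rank]                            # this rank owns one parameter block

# Local "solve": placeholder update of the owned block only.
local_update = np.ones_like(theta[my_idx]) * (rank + 1)

# Collective 1: all-gather the local block updates.
all_updates = comm.allgather(local_update)
for idx, upd in zip(blocks, all_updates):        # additive aggregation into theta
    theta[idx] += upd

# Collective 2: broadcast global parameters (e.g., after a rank-0 global quasi-Newton step).
theta = comm.bcast(theta, root=0)
if rank == 0:
    print(theta)
```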
Hybrid MPI + X implementations map each subdomain to an MPI rank; within each, the local neural network runs on a CPU or GPU (Shukla et al., 2021). Communication volume is kept minimal by exchanging only interface buffer values rather than full parameter vectors.
Two-level and multilevel decompositions (e.g., multilevel FBPINNs, Deep-DDM) combine a coarse global network with many fine local networks. This architecture restores strong and weak scalability and propagates global information efficiently for high-frequency problems (Dolean et al., 2024, Dolean et al., 2023), with wall-time improvements for large numbers of subdomains.
Discrete PINNs with enforced interface constraints (EIC-dPINN) (Yin et al., 16 May 2025) evaluate the energy with mesh-based Gaussian quadrature, decouple subdomains by enforcing displacement continuity at interfaces as hard constraints, and support nonmatching meshes, yielding robust parallel scaling even for complex 3D systems.
5. Empirical Performance and Convergence Across Problem Classes
Extensive numerical results show that domain decomposition PINNs outperform standard monolithic PINNs along several axes:
- Accuracy improvements: ASPQN/MSPQN reduce relative errors by up to an order of magnitude compared to standard L-BFGS-trained PINNs for Burgers', advection–diffusion, Klein–Gordon, Allen–Cahn, and parameter-discovery ODE problems (Kopaničáková et al., 2023, Saha et al., 2024).
- Training time: Single-GPU MSPQN trains substantially faster than standard L-BFGS training, and multi-GPU ASPQN provides further speed-ups (see the scaling results above), at identical error levels.
- Robustness to data sparsity/noise: FBPINNs maintain low error and parameter bias even when training data covers only quasi-stationary regimes or carries nontrivial noise (Heinlein et al., 2024, Saha et al., 2024).
- Adaptive basis: AB-PINNs adaptively refine subdomains where error persists, achieving low relative errors on challenging Helmholtz problems and outperforming both static FBPINNs and monolithic PINNs (Botvinick-Greenhouse et al., 10 Oct 2025).
- Scalability: Multilevel decompositions avert the loss of global information transfer and accuracy as the subdomain count grows; e.g., multilevel FBPINNs remain accurate and efficient in strong- and weak-scaling tests with large subdomain counts where monolithic PINNs fail (Dolean et al., 2023, Dolean et al., 2024).
Empirically, the subdomain count and overlap must be tuned: maximal splitting and moderate overlap yield the best trade-offs for stiff and multiscale problems (Kopaničáková et al., 2023, Moseley et al., 2021). Coarse-level correction is critical for retaining global solution properties at high subdomain counts (Dolean et al., 2022).
6. Extensions, Variants, and Practical Implementation
Multiple domain decomposition PINN variants address specialized modeling challenges:
- Schwarz preconditioners for quasi-Newton optimizers: ASPQN and MSPQN for L-BFGS acceleration (Kopaničáková et al., 2023).
- Finite-basis and extreme-learning machine linearization: ELM-FBPINNs deliver PINN-level accuracy through direct sparse linear solvers at dramatically reduced computational cost for linear PDEs (Anderson et al., 2024).
- Interface-aware models: Adaptive-slope activation functions in AdaI-PINNs eliminate hand-tuning for interface PDEs and outperform earlier I-PINNs on cost and accuracy (Roy et al., 2024).
- Partition-of-unity mixtures and unsupervised domain identification: POU-PINNs learn both the domain decomposition and the physics parameters, discovering subdomains for heterogeneous physics without labels and converging to low relative error (Rodriguez et al., 2024); a minimal gating sketch follows this list.
- Bayesian domain-decomposition PINNs: local BPINNs are trained per subdomain and their uncertainties are aggregated via probabilistic coupling at the interfaces, yielding scalable, uncertainty-aware PINN solvers suited to multiscale, noisy PDE problems (Figueres et al., 26 Apr 2025).
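For the partition-of-unity mixture idea in particular, a minimal sketch (not the POU-PINN architecture of Rodriguez et al.) is a gating network whose softmax outputs form a learned partition of unity that blends per-subdomain expert networks; sizes and the 1D input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class POUMixture(nn.Module):
    """Gate outputs form a learned partition of unity; experts act as local models."""
    def __init__(self, n_experts=3, width=16):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                  nn.Linear(width, n_experts))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
            for _ in range(n_experts))

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)               # (N, n_experts), rows sum to 1
        u = torch.cat([e(x) for e in self.experts], dim=-1)   # (N, n_experts)
        return (w * u).sum(dim=-1, keepdim=True)               # blended global prediction

x = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)
print(POUMixture()(x).shape)                                   # torch.Size([50, 1])
```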
Recommended practices include hard-constraint enforcement for boundary/interface conditions where possible, local input normalization for high-frequency features, per-subdomain network tuning, and two-stage (local then global) optimization. Communication in parallel settings should be restricted to interface buffers, with per-subdomain optimization performed asynchronously.
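As an example of hard-constraint enforcement, a Dirichlet condition u(0) = u(1) = 0 in 1D can be imposed exactly by multiplying the network output by a function that vanishes on the boundary; the particular distance factor below is one common, assumed choice rather than a prescribed one.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def u_hat(x):
    """Hard-constrained ansatz: vanishes at x = 0 and x = 1 by construction."""
    return x * (1.0 - x) * net(x)     # boundary condition holds exactly, not via a penalty

x = torch.tensor([[0.0], [0.5], [1.0]])
print(u_hat(x))                        # first and last entries are exactly zero
```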
7. Applications, Limitations, and Outlook
Domain decomposition PINNs have demonstrated effectiveness in:
- Multiphysics and multiscale PDEs (fluid flow, wave propagation, electromagnetics, porous media).
- Forward and inverse parameter discovery, particularly when dynamics are stationary or data is sparse.
- Realistic complex geometries (e.g., U.S. map inverse-diffusion with XPINNs (Shukla et al., 2021)).
- Hybrid mesh-based and flexible partitioning (PINN-FEM, EIC-dPINN), which enables exact imposition of strong boundary/interface conditions and supports nonconforming meshes and large-scale parallelism (Sobh et al., 14 Jan 2025, Yin et al., 16 May 2025).
Domain decomposition is most effective when solutions are stiff, high-frequency, or multi-modal; in well-conditioned, smooth problems, monolithic PINNs and even shallow networks may suffice. Limitations remain for entirely non-overlapping decompositions where information transfer is bottlenecked, and for problems where interface conditions are ambiguous or challenging to formulate.
Recent advances point toward adaptive, unsupervised decomposition (AB-PINNs, POU-PINNs), robust uncertainty quantification via Bayesian domain-decomposition PINNs, and staged hybrid learning (D3PINNs) as key avenues for scalable, accurate, and reliable scientific machine learning with PINNs.
References: (Kopaničáková et al., 2023, Shukla et al., 2021, Moseley et al., 2021, Dolean et al., 2022, Roy et al., 2024, Botvinick-Greenhouse et al., 10 Oct 2025, Heinlein et al., 2024, Dolean et al., 2023, Anderson et al., 2024, Rodriguez et al., 2024, Dolean et al., 2024, Snyder et al., 2023, Yin et al., 16 May 2025, Sobh et al., 14 Jan 2025, Figueres et al., 26 Apr 2025, Saha et al., 2024, Nohra et al., 2024, Bhargava et al., 2024).