- The paper introduces a unified weighted-loss PINN that decomposes the solution into regular and singular components to automatically capture boundary layers.
- Numerical experiments show stable mean relative L2 and L∞ errors of order 10⁻⁷ to 10⁻⁸ even for perturbation parameters as small as 10⁻¹⁰.
- The method balances stiff terms via weighted residuals, enabling mesh-free and scalable resolution of complex boundary-layer phenomena in both regular and irregular domains.
Unified Weighted-Loss PINN Framework for Boundary Layer Problems in Singularly Perturbed PDEs
Motivation and Context
Singularly perturbed PDEs, encountered in applications such as magnetohydrodynamics, reaction-transport systems, and electrostatics, feature boundary layers with steep gradients resulting from small perturbation parameters multiplying highest-order derivatives. Standard numerical discretizations and neural-network-based solvers often fail in such regimes due to loss of accuracy and numerical instability, especially when the mesh or model architecture cannot resolve the layers. Recent advances in PINNs and operator-learning networks achieve strong performance for smooth problems, but stiff multiscale features—especially boundary layers—result in ill-conditioned optimization landscapes and convergence failure.
Classical remedies leverage asymptotic expansions and specialized mesh designs requiring heavy a priori information and ad hoc configurations. Machine learning approaches, such as BL-PINN and SD-PINN, improve layer modeling but demand multiple networks, matching conditions, or complex residual coupling, and their reliance on explicit layer location and structure severely limits robustness, scalability, and applicability.
Methodological Innovations
The paper introduces a unified learning framework based on a weighted-loss PINN formulation that requires only knowledge of boundary layer thickness—not location or detailed asymptotic profile. The proposed decomposition recasts the solution as a sum of regular and singular components, each modeled by neural networks and defined globally over the domain. The singular component is parameterized via sigmoidal activations and level-set functions reflecting rapid spatial transitions near boundaries. This ansatz automatically enables layer localization during training, eliminating the need for hard-coded a priori information.
For one-dimensional problems, the singular parts are indexed by level-set functions associated with domain boundaries and embedded via scaling by ϵ. In two-dimensional settings, the singular component is constructed via separable, multiplicative combinations aligned with each coordinate boundary, capturing corner-layer phenomena. For irregular domains, a general level-set function parameterizes the layer, allowing mesh-free applicability.
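The one-dimensional decomposition can be sketched as follows. This is a minimal illustration, not the paper's exact parameterization: the regular component stands in for a trained network, the layer amplitudes `a_left`/`a_right` are fixed scalars here (they would be trainable), and the level-set functions φ₀(x) = x and φ₁(x) = 1 − x are the natural choices for the unit interval.

```python
import numpy as np

def sigmoid(z):
    # numerically safe sigmoid: clip avoids overflow for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500.0, 500.0)))

def solution_ansatz(x, eps, u_reg, a_left, a_right):
    """Decompose u(x) = regular part + boundary-layer (singular) parts
    on [0, 1]. u_reg stands in for the smooth network component; the
    amplitudes a_left/a_right are illustrative fixed scalars.
    """
    phi0 = x          # level-set function for the left boundary
    phi1 = 1.0 - x    # level-set function for the right boundary
    # Sigmoidal layer profiles: equal 1 at the boundary and decay to ~0
    # over a width of O(eps), mirroring the rapid spatial transition
    layer_left = 2.0 * sigmoid(-phi0 / eps)
    layer_right = 2.0 * sigmoid(-phi1 / eps)
    return u_reg(x) + a_left * layer_left + a_right * layer_right
```

Because the layer profiles vanish away from the boundaries, the singular components contribute only inside O(ϵ) neighborhoods, while the regular network carries the smooth bulk solution.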
The key innovation is the weighted loss function that scales residual contributions so that stiff terms are balanced at O(1) magnitude throughout the domain. In boundary-layer regions, diffusive and convective terms can dominate, and weights tailored to the proximity to the boundary maintain equitable optimization focus across layers and bulk regions. The framework is thus robust to perturbation parameters as small as 10⁻¹⁰, without explicit asymptotic matching or architectural modification.
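A minimal sketch of this balancing idea, for a residual of −ϵu″ + u − f on [0, 1]: inside the layer the diffusive term scales like 1/ϵ, so its raw residual swamps the loss. The proximity-based weight below (this particular piecewise form and the `5ϵ` layer-width cutoff are illustrative assumptions, not the paper's exact definition) rescales layer-region residuals back to O(1).

```python
import numpy as np

def weighted_residual_loss(residual, x, eps, layer_width=None):
    """Weight PDE residuals so stiff terms contribute at O(1) everywhere.

    residual : raw PDE residual evaluated at collocation points x
    The weight eps in the layer / 1 in the bulk is an illustrative
    choice reflecting the 1/eps scaling of the diffusive term there.
    """
    if layer_width is None:
        layer_width = 5.0 * eps          # layers occupy O(eps) neighborhoods
    dist = np.minimum(x, 1.0 - x)        # distance to the nearest boundary
    w = np.where(dist < layer_width, eps, 1.0)
    return np.mean((w * residual) ** 2)
```

With such weights, a 1/ϵ-sized layer residual and an O(1) bulk residual contribute comparably to the loss, which is what keeps the optimization landscape well conditioned as ϵ shrinks.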
Implementation Details
The decomposition is implemented by constructing neural network architectures where each component (regular or singular) is approximated by an independent fully connected network block. Inputs include spatial coordinates and appropriately scaled level-set variables. All residuals are computed via automatic differentiation, preventing loss of precision even for extreme stiffness.
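The paper relies on standard reverse-mode automatic differentiation as provided by deep learning frameworks; as a self-contained illustration of why exact derivatives matter here, the sketch below uses a tiny forward-mode Taylor-number class (not the paper's implementation) to evaluate the residual −ϵu″ + u − f with no finite-difference cancellation, which is exactly the failure mode stiff problems expose.

```python
class Taylor2:
    """Truncated 2nd-order Taylor number tracking (f, f', f'') exactly —
    a minimal stand-in for framework autodiff, immune to the
    cancellation errors of finite differences under extreme stiffness."""
    def __init__(self, v, d1=0.0, d2=0.0):
        self.v, self.d1, self.d2 = v, d1, d2
    def __add__(self, o):
        o = o if isinstance(o, Taylor2) else Taylor2(o)
        return Taylor2(self.v + o.v, self.d1 + o.d1, self.d2 + o.d2)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Taylor2) else Taylor2(o)
        return Taylor2(self.v * o.v,
                       self.v * o.d1 + self.d1 * o.v,
                       self.v * o.d2 + 2 * self.d1 * o.d1 + self.d2 * o.v)
    __rmul__ = __mul__

def residual_at(u, f, x, eps):
    """Residual of -eps*u'' + u - f at a single point, derivatives exact."""
    ux = u(Taylor2(x, 1.0, 0.0))   # seed: dx/dx = 1, d2x/dx2 = 0
    return -eps * ux.d2 + ux.v - f(x)
```

Exactness of the second derivative is the point: for ϵ as small as 10⁻¹⁰, a finite-difference u″ would lose nearly all significant digits, whereas the propagated derivative does not.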
Sampling strategies distribute collocation points densely in boundary-layer neighborhoods via truncated normal distributions with standard deviation O(ϵ), and uniformly in the interior, ensuring both smooth and sharp features are resolved. Boundary points are similarly sampled to enforce Dirichlet or other boundary constraints.
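The sampling strategy above can be sketched as follows; the split between layer and bulk counts and the choice of standard deviation exactly equal to ϵ are assumptions for illustration, and the truncation to the domain is done by simple rejection.

```python
import numpy as np

def sample_collocation(n_layer, n_bulk, eps, rng=None, domain=(0.0, 1.0)):
    """Collocation points: truncated normals with std O(eps) centered at
    each boundary, plus a uniform cloud over the interior (counts and
    std = eps are illustrative choices).
    """
    rng = np.random.default_rng(rng)
    a, b = domain

    def trunc_normal(center, n):
        # rejection sampling implements the truncation to [a, b]
        pts = np.empty(0)
        while pts.size < n:
            cand = rng.normal(center, eps, size=2 * n)
            cand = cand[(cand >= a) & (cand <= b)]
            pts = np.concatenate([pts, cand])
        return pts[:n]

    layer_pts = np.concatenate([trunc_normal(a, n_layer),
                                trunc_normal(b, n_layer)])
    bulk_pts = rng.uniform(a, b, size=n_bulk)
    return np.concatenate([layer_pts, bulk_pts])
```

Concentrating points at an O(ϵ) scale is what lets the residual loss actually "see" the layer; uniform sampling alone would place essentially no points inside it once ϵ is small.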
Optimization proceeds via the Levenberg–Marquardt algorithm, which is well suited to the moderate parameter counts and the nonlinear least-squares structure of the PINN residuals. The weighted loss naturally ensures that layer and bulk regions are treated with balanced significance, mitigating optimization imbalance.
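For reference, a minimal Levenberg–Marquardt loop of the kind used here looks as follows; the damping schedule (halve on success, double on failure) and the initial λ are standard illustrative choices, not the paper's settings.

```python
import numpy as np

def levenberg_marquardt(res_fn, jac_fn, theta0, n_iter=100, lam=1e-3):
    """Minimal LM loop for nonlinear least squares min ||r(theta)||^2.

    res_fn : theta -> residual vector r
    jac_fn : theta -> Jacobian dr/dtheta
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = res_fn(theta)
        J = jac_fn(theta)
        # damped normal equations: (J^T J + lam*I) step = J^T r
        A = J.T @ J + lam * np.eye(theta.size)
        step = np.linalg.solve(A, J.T @ r)
        new = theta - step
        if np.sum(res_fn(new) ** 2) < np.sum(r ** 2):
            theta, lam = new, lam * 0.5   # accept step, trust model more
        else:
            lam *= 2.0                    # reject step, increase damping
    return theta
```

The adaptive damping interpolates between gradient descent (large λ) and Gauss–Newton (small λ), which is why LM copes well with the ill-conditioned least-squares systems that stiff residuals produce.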
Numerical Results
The authors conduct extensive experiments on scalar and system PDEs (reaction-diffusion, convection-diffusion-reaction, nonlinear Poisson–Boltzmann, and coupled systems), on both regular and irregular domains. Across all tested equations, the method consistently achieves mean relative L2 and L∞ errors of order 10⁻⁷ to 10⁻⁸ for perturbation parameters as small as 10⁻¹⁰.
Key findings include:
- Boundary layer identification via training: Without explicit location information, the singular components activate only in boundary neighborhoods where rapid solution change is dictated by the PDE residual.
- No accuracy degradation as ϵ decreases: Across multiple orders of magnitude in ϵ, solution errors remain stable, with no observed growth or instability even for extreme stiffness.
- Robust extension to systems: The method accurately resolves coupled variables with potentially differing boundary layer structures, maintaining balanced accuracy across all solution components, even in non-diagonalizable systems.
- Applicability to irregular domains: The mesh-free level-set approach supports accurate solution recovery under arbitrary boundary shapes and nonlinearities.
- Weighted loss enables optimization stability: By balancing dominant stiff terms, rounding error and optimization difficulty are greatly reduced relative to standard PINN or operator network approaches.
Theoretical and Practical Implications
The presented methodology demonstrates that multiscale difficulties underlying boundary-layer phenomena in singularly perturbed PDEs can be tackled by optimization-level strategies, not solely by specialized solution representations or explicit asymptotic decompositions. Weighted loss functions can encode physical scaling and stiffness implicitly, thus streamlining architecture design and broadening applicability. Automatic layer detection via training ensures solution recovery without human intervention, paving the way for practical deployment in complex settings where prior knowledge is limited.
The approach establishes a general principle: for challenging stiff PDEs, optimization balancing through loss weighting is as critical as representation power. The framework is flexible, extensible, and capable of handling nonlinearities, systems, and complex geometries.
Future Directions
Several open problems and research avenues are suggested:
- Extending to PDEs with interior layers or more general multiscale phenomena, beyond strictly boundary-layer problems.
- Analyses of optimization dynamics under weighted-loss regimes, including convergence guarantees and correlation between loss minimization and solution error.
- Integration with scalable operator-learning architectures for parameterized families of PDEs, potentially leveraging meta-learning or transfer learning.
- Application to three-dimensional domains, where computational complexity and optimization stability pose additional challenges.
Conclusion
This paper offers a robust and unified weighted-loss PINN framework for singularly perturbed PDEs with boundary layer phenomena. The method achieves high accuracy and stable optimization across diverse equation classes, domains, and perturbation scales, without explicit knowledge of layer locations or specialized architectural modification. Theoretical insight and numerical evidence underscore the potential for weighted optimization strategies to enable practical and scalable neural-network-based solvers for stiff, multiscale PDEs (2603.29249).