Bump-PINNs: Sparse Meshless PDE Solvers
- Bump-PINNs are sparse, meshless neural network models that solve PDEs with explicitly parameterized, localized bump basis functions.
- They feature h-adaptivity and dynamic pruning to reduce redundant bumps, enhancing training speed and model interpretability.
- Empirical benchmarks demonstrate 10–100× parameter reduction and 2–20× faster training compared to conventional PINN architectures.
Bump-PINNs (BumpNet Physics-Informed Neural Networks) represent a family of sparse meshless neural network models designed for the solution of partial differential equations (PDEs) via physics-informed learning. These frameworks employ a novel, explicitly parameterized expansion in highly localized, trainable “bump” basis functions constructed from tanh-sigmoid activations, replacing the standard monolithic multilayer perceptrons (MLPs) used in conventional PINNs. Bump-PINNs achieve h-adaptivity, parsimony, and interpretability, offering significant improvements in parameter efficiency, training speed, and solution fidelity—especially for problems with localized features and boundary layers (Chiu et al., 19 Dec 2025).
1. Construction and Parameterization of Bump Basis Functions
The core architectural innovation of Bump-PINNs is the introduction of sparse, compactly supported bumps as parameterized nonlinear basis functions. In two dimensions, the BumpNet expansion is a sum of $N$ trainable bumps,

$$u(x, y) \approx \sum_{i=1}^{N} B_i(x, y),$$

where each $B_i$ is formed via a sigmoidal composition ensuring approximately rectangular, localized support: one sigmoidal "squash" factor per bounding half-space, scaled by a trainable amplitude. The amplitude $a_i$ sets the height of the bump, the sharpness parameter $s_i$ controls how steeply it decays at its boundaries, the angle $\alpha_i$ sets its orientation, and half-space offsets define the support boundaries. These parameters are trainable and admit closed-form expressions for the bump width $w_i$ and center $c_i$, allowing explicit control of support and ensuring positivity and domain constraints. Generalization to $n$ dimensions is achieved using $2n$ half-spaces, with normals determined via Gram–Schmidt orthogonalization and analogous squash functions.
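To make the construction concrete, the sketch below evaluates a single 2D bump as a product of four sigmoid squash factors over rotated coordinates. The exact functional form, the parameter names (`a`, `s`, `alpha`, `l1`, `r1`, `l2`, `r2`), and the rotation convention are illustrative assumptions rather than the paper's precise parameterization.

```python
import torch

def bump_2d(x, y, a, s, alpha, l1, r1, l2, r2):
    """One 2D bump with approximately rectangular, localized support.

    Assumed illustrative form: rotate coordinates by the orientation angle
    `alpha`, then multiply one sigmoid "squash" factor per bounding
    half-space, scaled by the amplitude `a`.
    """
    alpha = torch.as_tensor(alpha)
    # Local coordinates aligned with the bump's orientation.
    xi = torch.cos(alpha) * x + torch.sin(alpha) * y
    eta = -torch.sin(alpha) * x + torch.cos(alpha) * y
    # The sharpness s sets how steeply the bump decays at each boundary,
    # so a large s yields a nearly rectangular plateau between the edges.
    squash = (
        torch.sigmoid(s * (xi - l1)) * torch.sigmoid(s * (r1 - xi)) *
        torch.sigmoid(s * (eta - l2)) * torch.sigmoid(s * (r2 - eta))
    )
    return a * squash
```

Summing such bumps over a trainable set of parameters yields the BumpNet expansion above.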
For domain-constrained centers and strictly positive widths, unconstrained trainable variables are passed through smooth squashing and positivity-enforcing maps (for example, a sigmoid rescaled to the domain for each center and a softplus or exponential for each width), with the support boundaries then recovered algebraically from the centers and widths.
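A minimal sketch of one such reparameterization follows; the specific squash choices (a rescaled sigmoid for the center, a softplus for the width) are assumptions used only to illustrate how unconstrained variables yield domain-constrained centers, strictly positive widths, and algebraically recovered support boundaries.

```python
import torch
import torch.nn.functional as F

def constrained_center_width(c_raw, w_raw, x_lo, x_hi):
    """Map unconstrained trainables to a domain-constrained center and a
    strictly positive width (illustrative squash choices, not the paper's)."""
    center = x_lo + (x_hi - x_lo) * torch.sigmoid(c_raw)  # center stays in [x_lo, x_hi]
    width = F.softplus(w_raw)                             # width stays > 0
    # Support boundaries recovered algebraically from center and width.
    left, right = center - 0.5 * width, center + 0.5 * width
    return center, width, left, right
```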
2. Solution Representation, Loss Function, and Training Procedure
In Bump-PINNs, the solution $u(\mathbf{x})$ to the PDE (or $u(\mathbf{x}, t)$ in time-dependent cases) is expressed as a sum over bumps,

$$u_\theta(\mathbf{x}) = \sum_{i=1}^{N} B_i(\mathbf{x}),$$

where $\theta$ denotes the full set of trainable parameters involved in the definition of the bumps. Training follows a standard collocation-based PINN approach: $N_r$ interior points for the PDE residual and $N_b$ boundary points (plus initial points in time-dependent problems) are sampled, and the total loss combines the mean-squared PDE residual with a weighted mean-squared boundary mismatch,

$$\mathcal{L}(\theta) = \frac{1}{N_r} \sum_{j=1}^{N_r} \big| \mathcal{N}[u_\theta](\mathbf{x}_j^{r}) \big|^2 + \frac{\lambda_b}{N_b} \sum_{j=1}^{N_b} \big| u_\theta(\mathbf{x}_j^{b}) - g(\mathbf{x}_j^{b}) \big|^2,$$

where $\mathcal{N}$ is the PDE operator, $g$ the boundary data, $\lambda_b$ a boundary weight, and all necessary derivatives are computed by autodifferentiation of the bump expansion. For initial conditions $u(\mathbf{x}, 0) = u_0(\mathbf{x})$, an analogous mean-squared loss term is included.
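The following PyTorch-style sketch shows a collocation loss of this kind for a 2D Poisson problem; the choice of operator, the forcing term ($f \equiv 1$), the boundary weight `lam_b`, and the function names are illustrative assumptions, with `u_fn` standing in for the bump expansion evaluated at collocation points.

```python
import torch

def poisson_residual(u_fn, xy):
    """Residual of -Δu = 1 computed by automatic differentiation of u_fn."""
    xy = xy.clone().requires_grad_(True)
    u = u_fn(xy)  # (N,) values of the bump expansion at the collocation points
    grad_u = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(grad_u[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grad_u[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return -(u_xx + u_yy) - 1.0

def pinn_loss(u_fn, xy_res, xy_bnd, g_bnd, lam_b=1.0):
    """Total collocation loss: mean-squared PDE residual plus boundary mismatch."""
    residual = poisson_residual(u_fn, xy_res)
    u_b = u_fn(xy_bnd)
    return (residual ** 2).mean() + lam_b * ((u_b - g_bnd) ** 2).mean()
```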
3. h-Adaptivity and Dynamic Pruning of Basis Functions
Bump-PINNs implement a dynamic bump-pruning mechanism for h-adaptivity, i.e. the on-the-fly removal of superfluous bumps with negligible amplitudes: after every fixed number of optimization steps, all bumps whose amplitude magnitude falls below a prescribed threshold are pruned. This directly reduces model size, mitigates overfitting, and empirically accelerates convergence by eliminating flat directions from the optimization landscape. This adaptive capability concentrates model capacity where sharp PDE features occur, efficiently allocating resources and enabling parameter parsimony and interpretability.
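A sketch of the pruning step is shown below; the parameter layout (a tensor of amplitudes plus per-bump geometric parameters) and the default threshold are assumptions for illustration.

```python
import torch

def prune_bumps(amplitudes, geom_params, eps=1e-3):
    """Drop bumps whose |amplitude| is below eps (h-adaptive pruning).

    `amplitudes`: (N,) tensor of bump amplitudes a_i.
    `geom_params`: list of (N, ...) tensors holding per-bump geometry
    (centers, widths, orientations); the layout is illustrative.
    """
    keep = amplitudes.abs() >= eps
    return amplitudes[keep], [p[keep] for p in geom_params]
```

In practice this step would be invoked every fixed number of optimizer steps, rebuilding the optimizer state for the surviving bumps.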
4. Empirical Performance: Benchmarks and Comparative Metrics
Comprehensive experiments on canonical PDE benchmarks demonstrate that Bump-PINNs routinely outperform traditional PINN architectures in parameter efficiency, training speed, and accuracy. Key results include:
- 2D Helmholtz equation: a Bump-PINN with an initial bump grid (280 parameters) reached accuracy comparable to an MLP-PINN requiring over 15,000 parameters, in 50 s of training versus 80 s.
- 2D Poisson equation: a bump grid with 252 parameters reached its final MSE in 60 s, roughly 20× faster than SPINN at similar accuracy.
- 1D space-time heat equation: a bump expansion with 840 parameters attained accuracy comparable to an MLP-PINN (3,000 parameters, 40 s) in only 12 s; SPINN failed without additional finite-difference augmentation.
- High-speed 1D advection equation: a self-adaptive Bump-SAPINN with 154 parameters converged in 28 s where standard PINNs broke down.
Across all benchmarks, Bump-PINNs achieved 10–100× parameter reduction and 2–20× faster training relative to state-of-the-art PINN variants, while capturing sharp interfaces and boundary layers that challenge MLP architectures (Chiu et al., 19 Dec 2025).
Summary Table of Key Results
| PDE | Method | Parameter Count | Relative Error / MSE | Training Time |
|---|---|---|---|---|
| 2D Helmholtz | Bump-PINN | 280 | | 50 s |
| 2D Helmholtz | MLP-PINN | 15,000 | comparable | 80 s |
| 2D Poisson | Bump-PINN | 252 | | 60 s |
| 2D Poisson | SPINN | 252 | similar | 1,200 s |
| 1D Space-Time Heat | Bump-PINN | 840 | | 12 s |
| 1D Space-Time Heat | MLP-PINN | 3,000 | similar | 40 s |
5. Interpretability, Control, and Relationship to Existing PINN Frameworks
Bump-PINNs provide explicit, interpretable correspondence between model parameters and the geometry (center, size, orientation) of each basis function, contrasting with opaque internal representations in deep MLPs. This enables direct control over support localization and facilitates domain-constrained solution construction. Closed-form parameter-to-geometry relationships allow methodical enforcement of positivity, explicit center placement, and h-refinement. Existing neural architectures for operator learning (e.g., DeepONets) and evolutionary PDE solvers (EDNNs) can be hybridized with BumpNets by substituting the regression network with a BumpNet module, as in Bump-DeepONet and Bump-EDNN variants (Chiu et al., 19 Dec 2025).
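One plausible realization of such a hybrid, assuming the BumpNet replaces the trunk (coordinate) network of a DeepONet, is sketched below; the class name, layer sizes, and one-dimensional bump form are hypothetical and serve only to show where the bump expansion slots in.

```python
import torch

class BumpDeepONetSketch(torch.nn.Module):
    """Hypothetical Bump-DeepONet: a conventional branch net combined with
    trainable 1D bumps playing the role of the trunk (coordinate) network."""

    def __init__(self, n_sensors, n_bumps):
        super().__init__()
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(n_sensors, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, n_bumps),
        )
        # Per-bump amplitude, sharpness, and left/right support edges.
        self.a = torch.nn.Parameter(torch.randn(n_bumps))
        self.s = torch.nn.Parameter(10.0 * torch.ones(n_bumps))
        self.l = torch.nn.Parameter(torch.rand(n_bumps))
        self.r = torch.nn.Parameter(torch.rand(n_bumps) + 1.0)

    def forward(self, u_sensors, y):
        # u_sensors: (batch, n_sensors) samples of the input function.
        # y: (batch, 1) query coordinates in the output domain.
        trunk = self.a * torch.sigmoid(self.s * (y - self.l)) \
                       * torch.sigmoid(self.s * (self.r - y))   # (batch, n_bumps)
        return (self.branch(u_sensors) * trunk).sum(dim=-1, keepdim=True)
```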
6. Distinction from Other “B-PINN” Frameworks and Uncertainty Quantification
Care must be taken to distinguish Bump-PINNs from “Bayesian Physics-Informed Neural Networks” (B-PINNs) or “Error-Aware B-PINNs” as in (Graf et al., 2022), where “B-PINNs” denotes a Bayesian PINN with uncertainty quantification:
- Bump-PINNs (Chiu et al., 19 Dec 2025): Sparse, meshless, interpretable, and dynamically pruned expansions in parameterized bump basis functions for parsimony and localized approximation.
- B-PINNs (Graf et al., 2022): Bayesian neural networks for PDEs, with uncertainty quantification that incorporates epistemic (weight) and pseudo-aleatoric (residual-derived) error bounds, but without meshless or bump basis expansions.
The Bump-PINN approach is algorithmically and architecturally distinct from error-aware Bayesian PINNs (see summary of Bayesian PINN mechanisms and uncertainty quantification in (Graf et al., 2022)), though a plausible implication is that the parsimonious architecture of Bump-PINNs could facilitate improved uncertainty quantification by reducing overfitting and providing interpretable, localized basis functions.
7. Outlook and Future Research Directions
Bump-PINNs demonstrate advantages in terms of meshless adaptivity, interpretability, and computational efficiency, and serve as a general framework adaptable to various physics-informed and operator-learning paradigms. Potential extensions noted in (Chiu et al., 19 Dec 2025) include combinations with evolutionary deep neural networks for time-dependent PDEs (Bump-EDNN), incorporation as regression backbones in DeepONet architectures (Bump-DeepONet), and further development of dynamic pruning and geometric control techniques in higher dimensions. Robustness and accuracy on a broad suite of physically relevant benchmarks suggest a promising direction for research on interpretable, adaptive neural PDE solvers.