
Physics-Informed Extreme Learning Machines

Updated 12 October 2025
  • PIELMs are mesh-free neural network solvers that embed physical constraints into a single-hidden-layer network with fixed random weights for rapid PDE solution.
  • They assemble collocation-based linear systems to enforce PDE, boundary, and initial conditions, significantly reducing training time compared to iterative PINNs.
  • Distributed PIELMs (DPIELM) enhance local accuracy by partitioning complex domains, enabling precise handling of steep gradients and discontinuities.

Physics-Informed Extreme Learning Machines (PIELMs) constitute a neural network-based paradigm for the rapid, mesh-free numerical solution of partial differential equations (PDEs). The core feature is the integration of the "extreme learning machine" philosophy—assigning all hidden-layer weights randomly and fixing them, while computing the output weights via linear algebra—within a physics-informed framework that imposes PDEs and associated boundary/initial conditions at collocation points. This yields a linear system rather than a nonconvex optimization, enabling drastically improved training speed and robustness compared to conventional Physics-Informed Neural Networks (PINNs). PIELMs have been extended to distributed frameworks for complex domains (DPIELM), and have demonstrated accuracy on par with, or exceeding, PINNs and classical numerical solvers for a broad array of stationary and time-dependent linear PDE problems, particularly in complex geometries (Dwivedi et al., 2019).

1. Core Principles and Algorithmic Structure

A PIELM constructs its neural architecture as a single hidden-layer feed-forward network:

$$f(\chi) = \sum_{i=1}^{N} c_i\, \sigma\bigl(w_i \cdot \chi + b_i\bigr)$$

where $\chi$ denotes the input variables (space and, where relevant, time), $(w_i, b_i)$ are fixed, randomly assigned hidden-layer parameters, and the output weights $c_i$ are the sole trainable parameters. The PDE, boundary, and (if needed) initial conditions are evaluated at selected collocation points, leading to a set of constraints:

  • $\xi_{f} = 0$ (governing-equation residual)
  • $\xi_{bc} = 0$ (boundary-condition enforcement)
  • $\xi_{ic} = 0$ (initial condition for time-dependent problems)

The resulting constraints form a linear system:

$$H\,c = K$$

where $H$ aggregates the network outputs and their derivatives evaluated at the collocation points, and $K$ encodes the PDE right-hand side and prescribed data. The output weights $c$ are computed as the minimum-norm least-squares solution via the Moore–Penrose pseudo-inverse, sidestepping the iterative, gradient-based optimization loop typical of PINN approaches.

Key algorithmic steps:

  1. Randomly sample hidden-layer weights/biases.
  2. Generate collocation points for interior, boundary, and initial constraints.
  3. Assemble $H$ and $K$ by evaluating the network and its derivatives at those points.
  4. Solve $c = H^\dagger K$, where $H^\dagger$ denotes the Moore–Penrose pseudo-inverse.

The approach generalizes to stationary and time-dependent, linear and (with appropriate modifications) nonlinear PDEs.
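
As a concrete illustration of steps 1–4, consider the following minimal NumPy sketch. It is not taken from the cited paper; the test problem, tanh activation, neuron count, and sampling ranges are illustrative assumptions. It solves the 1D Poisson problem $-u''(x) = \pi^2 \sin(\pi x)$ on $[0, 1]$ with homogeneous Dirichlet conditions, whose exact solution is $u(x) = \sin(\pi x)$:

```python
import numpy as np

# Minimal PIELM sketch (illustrative assumptions throughout):
# solve -u''(x) = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0.

rng = np.random.default_rng(0)
N = 100                                     # hidden neurons
w = rng.uniform(-5.0, 5.0, N)               # fixed random weights (step 1)
b = rng.uniform(-5.0, 5.0, N)               # fixed random biases (step 1)

def act(z, order=0):
    """tanh activation and its first two derivatives in closed form."""
    t = np.tanh(z)
    if order == 0:
        return t
    if order == 1:
        return 1.0 - t**2
    return -2.0 * t * (1.0 - t**2)

x_int = np.linspace(0.0, 1.0, 50)[1:-1]     # interior collocation points (step 2)
x_bc = np.array([0.0, 1.0])                 # boundary collocation points (step 2)

Z_int = np.outer(x_int, w) + b              # pre-activations, shape (n_int, N)
Z_bc = np.outer(x_bc, w) + b

# Step 3: PDE rows enforce -u'' = f, boundary rows enforce u = 0.
H = np.vstack([-(w**2) * act(Z_int, 2),     # -u'' at interior points
               act(Z_bc, 0)])               # u at boundary points
K = np.concatenate([np.pi**2 * np.sin(np.pi * x_int),
                    np.zeros(2)])

# Step 4: one linear least-squares solve gives the output weights c.
c, *_ = np.linalg.lstsq(H, K, rcond=None)

x_test = np.linspace(0.0, 1.0, 200)
u_hat = act(np.outer(x_test, w) + b, 0) @ c
print("max error:", np.max(np.abs(u_hat - np.sin(np.pi * x_test))))
```

Because the hidden parameters are fixed and $\sigma$ is smooth, the derivatives entering $H$ are available in closed form; no automatic differentiation or gradient descent is involved.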

2. Performance Characteristics and Benchmarks

PIELMs have demonstrated the following numerical characteristics in benchmark experiments (Dwivedi et al., 2019):

  • For 1D steady advection or diffusion, errors on the order of $10^{-4}$ to $10^{-6}$ are achieved, frequently with fewer than half the collocation points required by traditional PINN-style approaches.
  • In complex 2D domains (such as irregular polygons or star-shaped regions), PIELM yields errors as low as $10^{-7}$ with several thousand collocation points, with network evaluation and solution times on the order of tens of seconds.
  • Time-dependent problems (e.g., linear advection–diffusion equations) are resolved rapidly (seconds per solution), capturing propagating fronts and eliminating "false diffusion" artifacts typical of upwind finite-difference schemes.

When compared with deep PINNs:

  • PIELMs require orders of magnitude fewer parameters.
  • Training is significantly faster due to the absence of iterative optimization.
  • Accuracy matches or exceeds that of deep PINNs, especially on problems where the solution is smooth or piecewise smooth and when the domain has complex geometry.

3. Distributed and Domain Decomposition Schemes

PIELMs struggle with functions possessing steep gradients or discontinuities when relying on a single global network, a direct consequence of the global approximation properties of neural networks—whose basis functions may not sufficiently represent sharply localized features. To overcome this, the Distributed PIELM (DPIELM) extends PIELM as follows:

  • The computational domain is partitioned into subdomains ("cells").
  • Each cell hosts a local PIELM trained solely on that patch.
  • $C^0$ and, if necessary, $C^1$ interface constraints are imposed to enforce continuity or smoothness at cell boundaries.
  • All local solution coefficients are assembled into a larger linear system with coupling at interfaces and global constraints, which is then solved (again, using pseudo-inverse techniques).
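
As a minimal illustration of this coupling, the following two-cell 1D sketch (an illustrative construction, not the paper's implementation) splits the same Poisson problem at $x = 0.5$ and stitches the local networks together with $C^0$ and $C^1$ interface rows:

```python
import numpy as np

# Two-cell DPIELM sketch (illustrative): -u'' = pi^2 sin(pi x) on [0, 1],
# with local networks on [0, 0.5] and [0.5, 1] coupled at the interface.

rng = np.random.default_rng(1)
N = 60
wL, bL = rng.uniform(-5, 5, N), rng.uniform(-5, 5, N)   # left-cell basis
wR, bR = rng.uniform(-5, 5, N), rng.uniform(-5, 5, N)   # right-cell basis

def act(z, order=0):
    t = np.tanh(z)
    return [t, 1 - t**2, -2 * t * (1 - t**2)][order]

def rows(x, w, b, order):
    """Feature rows for d^order/dx^order of sigma(w x + b) at points x."""
    return (w**order) * act(np.outer(np.atleast_1d(x), w) + b, order)

xL = np.linspace(0.0, 0.5, 25)[1:-1]        # interior points, left cell
xR = np.linspace(0.5, 1.0, 25)[1:-1]        # interior points, right cell

# Block system over stacked coefficients [cL; cR]: PDE rows are
# block-diagonal; only the last two rows couple the cells.
H = np.block([
    [-rows(xL, wL, bL, 2),    np.zeros((len(xL), N))],
    [np.zeros((len(xR), N)),  -rows(xR, wR, bR, 2)],
    [rows(0.0, wL, bL, 0),    np.zeros((1, N))],       # u(0) = 0
    [np.zeros((1, N)),        rows(1.0, wR, bR, 0)],   # u(1) = 0
    [rows(0.5, wL, bL, 0),    -rows(0.5, wR, bR, 0)],  # C0: values match
    [rows(0.5, wL, bL, 1),    -rows(0.5, wR, bR, 1)],  # C1: slopes match
])
K = np.concatenate([np.pi**2 * np.sin(np.pi * xL),
                    np.pi**2 * np.sin(np.pi * xR),
                    np.zeros(4)])                      # BC and interface RHS

c, *_ = np.linalg.lstsq(H, K, rcond=None)
cL, cR = c[:N], c[N:]                                  # local output weights
```

Only the interface rows couple the two coefficient blocks; the PDE rows stay block-diagonal, which is why the local assembly parallelizes naturally across cells.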

The DPIELM strategy allows for:

  • Improved expressivity and approximation accuracy for solutions with localized, high-frequency, or discontinuous features.
  • Greater computational scalability via localized (and potentially parallelizable) training.

In challenging test cases exhibiting sharp peaks, wave packets, or near-discontinuities, DPIELM outperforms both single PIELM and deep PINN architectures.

4. Limitations, Representation Power, and Extensions

The principal limitations of PIELMs, as established in (Dwivedi et al., 2019), are:

  • Limited representational capacity for PDE solutions featuring steep gradients, boundary/internal layers, or discontinuities (e.g., for advection–diffusion with vanishing diffusion).
  • Network expressivity does not always increase by simply adding more hidden neurons if the basis remains global and lacks adaptivity.

Even deep PINNs face fundamental limitations in capturing high-frequency phenomena if the network basis is insufficiently rich or if training gets trapped in local minima. To address this, DPIELM leverages domain decomposition, but the approach still assumes that each local cell can be adequately described with a single-layer network, which may not suffice for extreme problem stiffness.

A plausible implication is that future extensions may incorporate:

  • Adaptive domain decomposition guided by a posteriori error estimates.
  • Feature enrichments in the local bases (e.g., Fourier features or domain-adaptive kernels; see the sketch after this list).
  • Hybridizing with mesh-based or adaptive finite-element strategies when extreme stiffness arises.
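
For the Fourier-feature enrichment in particular, one hypothetical construction (the frequency distribution and scale are assumptions, not from the source) replaces the sigmoidal basis with random sinusoids whose derivatives remain available in closed form:

```python
import numpy as np

# Hypothetical Fourier-feature basis for a local PIELM cell: random
# sinusoids instead of tanh units. The frequency scale is an assumed
# knob for the finest feature the basis can resolve.

rng = np.random.default_rng(2)
N = 100
w = rng.normal(0.0, 10.0, N)            # random frequencies
b = rng.uniform(0.0, 2 * np.pi, N)      # random phases

def fourier_rows(x, order=0):
    """d^order/dx^order of sin(w x + b): derivatives cycle through
    sin -> cos -> -sin -> -cos, each picking up a factor of w."""
    z = np.outer(np.atleast_1d(x), w) + b
    table = [np.sin, np.cos, lambda s: -np.sin(s), lambda s: -np.cos(s)]
    return (w**order) * table[order % 4](z)
```

Such rows could be substituted for the tanh-based rows in the assembly wherever sharper local detail must be resolved.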

5. Implications, Practical Use, and Broader Outlook

The adoption of the PIELM framework as a meshfree, physics-informed solver has several implications:

  • Computational efficiency and rapid solution times render PIELM attractive as a solver for both prototyping and real-time applications, particularly for linear PDEs on irregular domains where meshing is prohibitive.
  • The direct linear algebraic formulation reduces the sensitivity to hyperparameters (e.g., network width, learning rate, depth), supporting robust default configurations.
  • Being meshfree, PIELMs naturally accommodate irregular, complex, or moving geometries, as required in applications ranging from computational physics to engineering design.

A plausible avenue for future work is the extension of PIELMs beyond linear problems to broader classes of nonlinear and time-dependent PDEs—potentially incorporating linearization ("Newtonized" PIELMs) or iterative correction strategies embedded within the framework.
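
One hypothetical form of such a Newtonized loop (the test equation, manufactured forcing, and tolerance are assumptions) targets the nonlinear Poisson problem $-u'' + u^3 = f$: each Newton step linearizes the cubic term about the current iterate $u_k$, turning the update into the linear equation $-u'' + 3u_k^2\,u = f + 2u_k^3$, which is exactly one PIELM least-squares solve.

```python
import numpy as np

# Hypothetical "Newtonized" PIELM (illustrative): -u'' + u^3 = f on [0, 1]
# with u(0) = u(1) = 0, manufactured so the exact solution is sin(pi x).

rng = np.random.default_rng(3)
N = 100
w, b = rng.uniform(-5, 5, N), rng.uniform(-5, 5, N)

def act(z, order=0):
    t = np.tanh(z)
    return [t, 1 - t**2, -2 * t * (1 - t**2)][order]

x = np.linspace(0.0, 1.0, 60)[1:-1]
xb = np.array([0.0, 1.0])
Phi0 = act(np.outer(x, w) + b, 0)            # u rows at interior points
Phi2 = (w**2) * act(np.outer(x, w) + b, 2)   # u'' rows at interior points
Phib = act(np.outer(xb, w) + b, 0)           # u rows at boundary points
f = np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3   # manufactured RHS

c = np.zeros(N)
for _ in range(20):
    u = Phi0 @ c
    # Linearized PDE rows: -u'' + 3 u_k^2 u = f + 2 u_k^3.
    H = np.vstack([-Phi2 + 3.0 * (u**2)[:, None] * Phi0, Phib])
    K = np.concatenate([f + 2.0 * u**3, np.zeros(2)])
    c_new, *_ = np.linalg.lstsq(H, K, rcond=None)
    if np.max(np.abs(c_new - c)) < 1e-10:    # assumed stopping tolerance
        c = c_new
        break
    c = c_new
```

Each iteration reuses the same fixed random features, so the cost per Newton step is a single dense least-squares solve.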

6. Summary Table: PIELM vs. PINN and Classical Methods

| Method | Optimization | Main Strength | Key Limitation |
|--------|--------------|---------------|----------------|
| PINN | Iterative, nonlinear | Nonlinear & data-driven PDEs | Slow convergence, local minima, tuning required |
| PIELM | Direct, linear | Speed, meshfree, simplicity | Struggles with sharp/discontinuous solutions |
| DPIELM | Distributed, linear | Handles complex features, local adaptation | Coupling increases system size, interface constraint setup |

7. Future Prospects

The methodology introduced in (Dwivedi et al., 2019) sets the stage for:

  • The development of scalable neural PDE solvers for complex domains and multiphysics coupling.
  • Accelerated, mesh-free solvers for both forward and inverse PDE problems where rapid turnaround or flexibility is desired.
  • Potential generalization to nonlinear problems by leveraging time-stepping, operator splitting, or locally adaptive bases—subject to current challenges in network representation and solution regularity.

The approach points toward a unifying perspective that blends strong physics-informed constraints, the rapid extreme learning paradigm, and distributed approximation theory, and it remains an active area of research with widening applications in computational science and engineering.

References

  • Dwivedi, V., and Srinivasan, B. (2019). "Physics Informed Extreme Learning Machine (PIELM): A rapid method for the numerical solution of partial differential equations." arXiv:1907.03507.
