MoPINNEnKF Framework for Robust PDE Inference

Updated 20 March 2026
  • MoPINNEnKF is an iterative multi-objective PINN approach that integrates NSGA-III and the ensemble Kalman filter to handle noisy data and missing physics.
  • It employs evolutionary search to generate a Pareto-optimal ensemble of neural networks, balancing losses from residuals, initial/boundary conditions, and data fit.
  • The iterative framework refines predictions through data assimilation, significantly reducing errors in forward and inverse PDE tasks under high noise conditions.

The MoPINNEnKF framework is an iterative multi-objective physics-informed neural network (PINN) methodology that integrates non-dominated sorting genetic algorithm III (NSGA-III)-based multi-objective ensemble search with the ensemble Kalman filter (EnKF) for robust inference in partial differential equation (PDE) forward and inverse problems. It is specifically designed to address the challenges posed by noisy observational data and missing physics, which often hinder the performance of conventional PINN methods. By composing an ensemble of Pareto-optimal PINNs and coupling this ensemble with iterative denoising and data assimilation, MoPINNEnKF achieves significant improvements in both accuracy and robustness for model-based inference under uncertainty (Lu et al., 31 May 2025).

1. Physics-Informed Neural Network Foundation

MoPINNEnKF builds on the standard PINN paradigm, in which a neural network parameterization $\hat u(x,t;\theta)$ approximates the solution $u(x,t)$ of a PDE, typically expressed as
$$u_t + \mathcal{N}[u] = 0, \quad x \in \Omega,\ t \in [0,T].$$
The loss function minimized during PINN training is a weighted sum
$$\min_\theta \Bigl\{ \omega_{ic}\,\mathcal{L}_{ic}(\theta) + \omega_{bc}\,\mathcal{L}_{bc}(\theta) + \omega_{res}\,\mathcal{L}_{res}(\theta) + \omega_{data}\,\mathcal{L}_{data}(\theta) \Bigr\},$$
where the losses enforce the PDE residual, initial/boundary conditions, and data fit, each measured via mean-squared error. This lets the network encode not only the observational data but also the mathematical structure of the governing physics.
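The weighted-sum objective can be sketched numerically. Below is a minimal numpy illustration, assuming plain arrays of predictions and targets; the function names and default weights are ours, not from the paper:

```python
import numpy as np

def mse(pred, target):
    """Mean-squared error between two arrays."""
    return float(np.mean((pred - target) ** 2))

def pinn_loss(residual, ic_pred, ic_true, bc_pred, bc_true,
              data_pred, data_true, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four PINN loss terms: initial condition,
    boundary condition, PDE residual, and data fit (in that weight order)."""
    w_ic, w_bc, w_res, w_data = weights
    l_res = float(np.mean(residual ** 2))  # residual = u_t + N[u] at collocation points
    l_ic = mse(ic_pred, ic_true)
    l_bc = mse(bc_pred, bc_true)
    l_data = mse(data_pred, data_true)
    return w_ic * l_ic + w_bc * l_bc + w_res * l_res + w_data * l_data
```

In a full PINN, `residual` would come from automatic differentiation of the network output; here it is just an array.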

2. Multi-Objective Ensemble via NSGA-III

Rather than collapsing all losses into a scalar objective, MoPINNEnKF takes a genuinely multi-objective approach via NSGA-III. Each term $\mathcal{L}_{res}, \mathcal{L}_{ic}, \mathcal{L}_{bc}, \mathcal{L}_{data}$ is treated as a separate objective:
$$\min_{\theta\in\Theta} \left( \mathcal{L}_{res}(\theta), \mathcal{L}_{ic}(\theta), \mathcal{L}_{bc}(\theta), \mathcal{L}_{data}(\theta) \right).$$
NSGA-III evolves a population of $N$ candidate networks through crossover, mutation, and non-dominated sorting, assembling an ensemble $\{\theta_l\}_{l=1}^{N_s}$ of Pareto-optimal solutions distributed along the optimal trade-off front. This ensemble embodies diverse balances among the physics, boundary, and data objectives and serves as the set of prior samples for subsequent data assimilation.
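Non-dominated sorting, the selection core of NSGA-III, can be illustrated with a brute-force Pareto filter. This is a sketch only; full NSGA-III additionally uses reference-point niching, crossover, and mutation:

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (minimization in every
    objective). `objectives` is an (n_candidates, n_objectives) array,
    e.g. columns (L_res, L_ic, L_bc, L_data) for each candidate network."""
    obj = np.asarray(objectives, dtype=float)
    n = obj.shape[0]
    keep = []
    for i in range(n):
        # i is dominated if some j is <= in every objective and < in at least one
        dominated = any(
            np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

For example, with two objectives, the candidates (1, 2) and (2, 1) are mutually non-dominated, while (2, 2) is dominated by both.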

3. Ensemble Kalman Filter-Based Data Assimilation

The EnKF component of MoPINNEnKF assimilates noisy data using the Pareto ensemble. For each network $\theta_i$, the PINN prediction at the observation points forms the state vector $x^{(i)}$. The observation operator $H$ is typically the identity matrix, with observations $y^{(i)} = Hx^{(i)} + \eta^{(i)}$ for noise $\eta^{(i)} \sim \mathcal{N}(0,R)$. The Kalman gain, forecast mean, and covariances are computed, followed by the EnKF update
$$x^{(i)}_a = x^{(i)} + K\left(y^{(i)} - Hx^{(i)}\right),$$
yielding an updated "analysis" ensemble. Aggregating the analysis outputs forms a filtered dataset, denoised and assimilated for subsequent PINN retraining.
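A stochastic EnKF analysis step with identity observation operator might look like the following in numpy. This is a generic textbook EnKF sketch under the assumptions stated above; the paper's exact variant may differ:

```python
import numpy as np

def enkf_update(X, y, R, rng=None):
    """Stochastic EnKF analysis step with identity observation operator H.
    X : (n_state, n_ens) forecast ensemble (PINN predictions at obs points)
    y : (n_state,) noisy observation vector
    R : (n_state, n_state) observation-noise covariance
    Returns the analysis ensemble X_a with the same shape as X."""
    rng = np.random.default_rng(rng)
    n_state, n_ens = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean                              # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                   # sample forecast covariance
    K = P @ np.linalg.inv(P + R)                # Kalman gain (H = I)
    # Perturb the observations so the analysis spread is statistically consistent
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(n_state), R, size=n_ens).T
    return X + K @ (Y - X)
```

With very small observation noise $R$, the gain approaches the identity and the analysis ensemble collapses toward the observations, as expected.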

4. Iterative MoPINNEnKF Workflow

MoPINNEnKF proceeds in an iterative loop:

  1. Initialization: Define the PINN loss as in the canonical weighted-sum formulation.
  2. Ensemble Generation: Apply NSGA-III for $S$ steps to create $\{\theta_l^{(1)}\}$.
  3. EnKF Update: Evaluate the networks at the observation locations and apply the EnKF to produce filtered data $\widetilde{\mathcal{D}}^{(1)}$.
  4. Loss Refinement: Update the PINN's data loss to match the filtered data:

$$\mathcal{L}_{data}^{(1)}(\theta) = \frac{1}{N_{\rm obs}} \sum_{k=1}^{N_{\rm obs}} \left(\hat u(x_k,t_k;\theta) - \widetilde u^{(1)}_k\right)^2.$$

  5. Repeat: Re-run NSGA-III and the EnKF with the updated data loss and filtered data until convergence is detected via

$$\frac{1}{N_{\rm obs}} \sum_{k=1}^{N_{\rm obs}} \left| \hat u(x_k,t_k;\theta^{(m+1)}) - \hat u(x_k,t_k;\theta^{(m)}) \right|^2 < \epsilon_{\rm iter}.$$

This iterative filtering-and-retraining loop denoises the corrupted data and incrementally improves the solution.
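The loop structure above can be sketched with stand-in functions for the two inner stages. Here `train_ensemble` and `enkf_filter` are placeholders for the NSGA-III search and the EnKF update; only the iteration and convergence logic is meant literally:

```python
import numpy as np

def mopinnenkf_loop(train_ensemble, enkf_filter, y_obs,
                    eps_iter=1e-6, max_iter=20):
    """Iterate ensemble training and EnKF filtering until the mean-squared
    change in predictions at the observation points drops below eps_iter.
    train_ensemble(data) -> ensemble-mean prediction at the obs points
    enkf_filter(pred, y_obs) -> filtered (denoised) data for retraining"""
    data = y_obs.copy()
    prev = None
    for m in range(max_iter):
        pred = train_ensemble(data)       # NSGA-III ensemble stage (stub)
        data = enkf_filter(pred, y_obs)   # EnKF assimilation stage (stub)
        if prev is not None and np.mean((pred - prev) ** 2) < eps_iter:
            return pred, m + 1            # converged
        prev = pred
    return prev, max_iter
```

With contractive stubs (e.g. training that shrinks toward a fixed point and filtering that averages prediction and observation), the loop converges geometrically to the fixed point of the composed update.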

5. Benchmark Problem Applications

MoPINNEnKF's effectiveness is demonstrated on two canonical problems:

  • One-dimensional viscous Burgers equation: A nonlinear PDE with Dirichlet boundaries and a prescribed initial condition. Forward tests simulate parameter misspecification in the PINN (incorrect viscosity $\nu$), while inverse tests treat $\nu$ as a trainable parameter. Noise is applied to the data at levels $\eta = 20\%, 50\%, 80\%$.
  • Time-fractional mixed diffusion-wave equation (TFMDWE): Uses the Caputo fractional derivative, with zero Dirichlet/initial conditions and a composite source term dependent on an unknown order $\alpha$. The inverse problem seeks to estimate the true value $\alpha = 0.5$.

For both, MoPINNEnKF is applied under substantial observational noise, simulating realistic data corruption and missing physics.
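As an aside on the first benchmark, the Burgers residual $u_t + u\,u_x - \nu\,u_{xx}$ can be checked numerically against the exact solution $u(x,t) = x/(1+t)$, for which $u_{xx} = 0$ and $u_t + u\,u_x = 0$, so the residual vanishes for any $\nu$. This finite-difference check is our illustration, not a computation from the paper:

```python
import numpy as np

def burgers_residual(u, x, t, nu):
    """Central finite-difference residual u_t + u*u_x - nu*u_xx on a
    uniform grid. u has shape (len(t), len(x)); interior points only."""
    dx, dt = x[1] - x[0], t[1] - t[0]
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    u_in = u[1:-1, 1:-1]
    return u_t + u_in * u_x - nu * u_xx

x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 1.0, 101)
T, X = np.meshgrid(t, x, indexing="ij")
u_exact = X / (1.0 + T)   # exact Burgers solution (u_xx = 0 for all nu)
res = burgers_residual(u_exact, x, t, nu=0.05)
```

The residual is nonzero only through the $O(\Delta t^2)$ truncation error of the time difference, so it stays small on this grid.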

6. Empirical Performance and Comparative Analysis

MoPINNEnKF demonstrates consistent and substantial performance gains over both plain Adam-trained PINN (ADAM-PINN) and the NSGA-III-only PINN (NSGA-III-PINN). Selected results for the Burgers forward problem (mean squared error metrics across noise levels) are as follows:

| Model | 20% noise | 50% noise | 80% noise |
| --- | --- | --- | --- |
| ADAM-PINN | $1.4 \times 10^{-3}$ | $2.9 \times 10^{-3}$ | $3.6 \times 10^{-3}$ |
| NSGA-III-PINN | $1.6 \times 10^{-3}$ | $2.4 \times 10^{-3}$ | $3.3 \times 10^{-3}$ |
| MoPINNEnKF | $0.6 \times 10^{-3}$ | $0.9 \times 10^{-3}$ | $1.4 \times 10^{-3}$ |

On inverse problems (recovering parameters such as $\nu$ and $\alpha$), MoPINNEnKF achieves $L^1$ errors around half or less of those of the other benchmarks for $\eta \leq 50\%$, and it retains the best performance at higher noise except under extremely poor data. In the TFMDWE cases, the mean absolute error and the parameter $L^1$ error in both forward and inverse settings are reduced by factors of 2–3 over competitor methods, most markedly at moderate-to-high ($\leq 80\%$) noise levels.

7. Synthesis and Theoretical Implications

MoPINNEnKF exemplifies a principled hybridization of multi-objective evolutionary search and ensemble Bayesian data assimilation. NSGA-III’s Pareto-ensemble supports model diversity and robust trade-off exploration, while EnKF’s assimilation capabilities permit effective denoising and incremental correction for imperfect data and missing physics. Iterative loss refinement based on filtered data results in PINN models that not only generalize better under noise but also cope with incorrect prior physical assumptions and underdetermined inverse settings. A plausible implication is that MoPINNEnKF's design may suggest pathways for other uncertainty-quantification and multimodal data assimilation methods in PINN-based PDE inference (Lu et al., 31 May 2025).
