MoPINNEnKF Framework for Robust PDE Inference
- MoPINNEnKF is an iterative multi-objective PINN approach that integrates NSGA-III and the ensemble Kalman filter to handle noisy data and missing physics.
- It employs evolutionary search to generate a Pareto-optimal ensemble of neural networks, balancing losses from residuals, initial/boundary conditions, and data fit.
- The iterative framework refines predictions through data assimilation, significantly reducing errors in forward and inverse PDE tasks under high noise conditions.
The MoPINNEnKF framework is an iterative multi-objective physics-informed neural network (PINN) methodology that integrates non-dominated sorting genetic algorithm III (NSGA-III)-based multi-objective ensemble search with the ensemble Kalman filter (EnKF) for robust inference in partial differential equation (PDE) forward and inverse problems. It is specifically designed to address the challenges posed by noisy observational data and missing physics, which often hinder the performance of conventional PINN methods. By composing an ensemble of Pareto-optimal PINNs and coupling this ensemble with iterative denoising and data assimilation, MoPINNEnKF achieves significant improvements in both accuracy and robustness for model-based inference under uncertainty (Lu et al., 31 May 2025).
1. Physics-Informed Neural Network Foundation
MoPINNEnKF builds on the standard PINN paradigm, in which a neural network $u_\theta(x,t)$ approximates the solution of a PDE, written generically as $\mathcal{N}[u](x,t) = 0$ subject to initial and boundary conditions. The loss function minimized during PINN training is a weighted sum

$$\mathcal{L}(\theta) = w_{r}\,\mathcal{L}_{\mathrm{res}}(\theta) + w_{b}\,\mathcal{L}_{\mathrm{ibc}}(\theta) + w_{d}\,\mathcal{L}_{\mathrm{data}}(\theta),$$

where the three terms enforce the PDE residual, the initial/boundary conditions, and the data fit, each measured via mean-squared error. This enables the neural network to encode not only the observational data but also the mathematical structure of the governing physics.
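The weighted-sum objective above can be sketched in a few lines; the weight names `w_r`, `w_b`, `w_d` and the example error arrays are illustrative placeholders, not taken from the paper:

```python
import numpy as np

def pinn_loss(residual, ic_bc_error, data_error, w_r=1.0, w_b=1.0, w_d=1.0):
    """Weighted-sum PINN loss: each term is a mean-squared error.

    residual    : PDE residual values at collocation points
    ic_bc_error : mismatch at initial/boundary points
    data_error  : mismatch at (noisy) observation points
    The weight names w_r, w_b, w_d are hypothetical, not from the paper.
    """
    mse = lambda e: float(np.mean(np.asarray(e) ** 2))
    return w_r * mse(residual) + w_b * mse(ic_bc_error) + w_d * mse(data_error)
```

In practice each error array would be produced by evaluating (and automatically differentiating) the network at collocation, boundary, and observation points.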
2. Multi-Objective Ensemble via NSGA-III
Rather than collapsing all losses into a single scalar objective, MoPINNEnKF takes a genuinely multi-objective approach using NSGA-III. Each term ($\mathcal{L}_{\mathrm{res}}$, $\mathcal{L}_{\mathrm{ibc}}$, $\mathcal{L}_{\mathrm{data}}$) is treated as a separate objective,

$$\min_{\theta}\ \big(\mathcal{L}_{\mathrm{res}}(\theta),\ \mathcal{L}_{\mathrm{ibc}}(\theta),\ \mathcal{L}_{\mathrm{data}}(\theta)\big).$$

NSGA-III evolves a population of candidate networks through crossover, mutation, and non-dominated sorting, assembling an ensemble of Pareto-optimal solutions distributed along the optimal trade-off front. This ensemble embodies diverse balances between the physics, boundary, and data objectives and serves as the set of prior samples for subsequent data assimilation.
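A minimal sketch of the non-dominated sorting at the heart of NSGA-III, extracting only the first Pareto front of loss vectors to be minimized; the full algorithm additionally uses reference-point niching, crossover, and mutation, which are omitted here:

```python
import numpy as np

def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly better in one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(losses):
    """Indices of non-dominated loss vectors, i.e. NSGA-III's first front."""
    front = []
    for i, li in enumerate(losses):
        if not any(dominates(lj, li) for j, lj in enumerate(losses) if j != i):
            front.append(i)
    return front
```

For example, among the loss vectors `[1, 5]`, `[2, 2]`, `[5, 1]`, `[4, 4]`, only the last is dominated (by `[2, 2]`), so the first three form the Pareto front.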
3. Ensemble Kalman Filter-Based Data Assimilation
The EnKF component of MoPINNEnKF assimilates noisy data using the Pareto ensemble. For each network $j$, the PINN prediction at the observation points forms the state vector $u^{(j)}$. The observation operator $H$ is typically the identity matrix, and the observations $y$ carry additive noise $\epsilon \sim \mathcal{N}(0, R)$. The forecast mean and covariance $P^{f}$ are computed from the ensemble, along with the Kalman gain $K = P^{f} H^{\top} (H P^{f} H^{\top} + R)^{-1}$, followed by the EnKF update

$$u^{(j),a} = u^{(j)} + K\big(y^{(j)} - H u^{(j)}\big),$$

where $y^{(j)}$ are perturbed observations, yielding an updated "analysis" ensemble. Aggregating the analysis outputs produces a filtered dataset, denoised and assimilated, for subsequent PINN retraining.
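The analysis step can be sketched with a stochastic (perturbed-observations) EnKF under the identity observation operator described above; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def enkf_update(ensemble, y_obs, obs_std, rng):
    """Stochastic EnKF analysis step with identity observation operator H = I.

    ensemble : (N, d) array, each row one PINN's prediction at the d observation points
    y_obs    : (d,) noisy observations
    obs_std  : observation noise standard deviation (R = obs_std**2 * I)
    """
    N, d = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)            # forecast anomalies
    P = X.T @ X / (N - 1)                           # sample forecast covariance
    R = obs_std ** 2 * np.eye(d)
    K = P @ np.linalg.inv(P + R)                    # Kalman gain (H = I)
    perturbed = y_obs + rng.normal(0.0, obs_std, size=(N, d))
    return ensemble + (perturbed - ensemble) @ K.T  # analysis ensemble
```

The analysis ensemble mean is pulled from the forecast mean toward the observations, with the pull strength set by the ratio of forecast to observation uncertainty.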
4. Iterative MoPINNEnKF Workflow
MoPINNEnKF proceeds in an iterative loop:
- Initialization: define the PINN loss as the canonical weighted sum of residual, initial/boundary, and data terms.
- Ensemble generation: apply NSGA-III for a fixed number of generations to produce a Pareto-optimal ensemble of networks.
- EnKF update: evaluate the ensemble networks at the observation locations and apply the EnKF to produce the filtered data $\tilde{y}$.
- Loss refinement: update the PINN's data loss to target the filtered data, $\mathcal{L}_{\mathrm{data}}(\theta) = \frac{1}{N_d}\sum_{i=1}^{N_d}\big(u_\theta(x_i, t_i) - \tilde{y}_i\big)^2$.
- Repeat: re-run NSGA-III and the EnKF with the updated data loss and filtered data until successive iterates meet a convergence criterion (e.g., the change in the filtered data falls below a tolerance).
This iterative filtering and retraining loop denoises corrupt data and incrementally improves the solution.
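The outer loop above can be sketched abstractly, with the NSGA-III training and EnKF steps stubbed out as user-supplied callables and a simple relative-change stopping rule standing in for the paper's convergence criterion (all names here are illustrative):

```python
import numpy as np

def mopinnenkf_loop(train_ensemble, enkf_filter, y_noisy, max_iter=10, tol=1e-3):
    """Outer MoPINNEnKF iteration (structure only; inner solvers are stubs).

    train_ensemble(data) -> (N, d) ensemble predictions at observation points
                            (stands in for NSGA-III Pareto-ensemble training)
    enkf_filter(preds, data) -> (d,) filtered dataset
                            (stands in for the EnKF analysis step)
    """
    data = np.asarray(y_noisy, dtype=float)
    for _ in range(max_iter):
        preds = train_ensemble(data)           # NSGA-III Pareto ensemble
        filtered = enkf_filter(preds, data)    # EnKF-denoised observations
        converged = np.linalg.norm(filtered - data) <= tol * np.linalg.norm(data)
        data = filtered                        # retrain against denoised data
        if converged:
            break
    return data
```

With toy callables that average the current data toward a ground truth, the loop contracts to that truth, mimicking the incremental denoising behavior.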
5. Benchmark Problem Applications
MoPINNEnKF's effectiveness is demonstrated on two canonical problems:
- One-dimensional viscous Burgers equation: a nonlinear PDE with Dirichlet boundaries and an initial condition. Forward tests simulate parameter misspecification in the PINN (an incorrect viscosity $\nu$), while inverse tests treat $\nu$ as a trainable parameter. Noise is applied to the data at levels of 20%, 50%, and 80%.
- Time-fractional mixed diffusion-wave equation (TFMDWE): uses the Caputo fractional derivative, with zero Dirichlet/initial conditions and a composite source term that depends on an unknown parameter; the inverse problem seeks to recover its true value.
For both, MoPINNEnKF is applied under substantial observational noise, simulating realistic data corruption and missing physics.
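For context on the TFMDWE's fractional operator, the Caputo derivative is commonly discretized with the L1 scheme; the sketch below covers the subdiffusion range $0 < \alpha < 1$ on a uniform grid (the mixed diffusion-wave setting also involves orders in $(1,2)$, which this sketch does not handle):

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """L1 scheme for the Caputo derivative of order 0 < alpha < 1.

    u  : samples u(t_0), ..., u(t_n) on a uniform grid with step dt
    Returns the approximation of D_t^alpha u at the final grid point t_n.
    """
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights b_j
    diffs = np.diff(u)[::-1]                        # u_{n-j} - u_{n-j-1}
    return dt ** (-alpha) / math.gamma(2 - alpha) * float(b @ diffs)
```

Because the L1 scheme interpolates $u$ piecewise linearly, it is exact for $u(t) = t$, reproducing the known Caputo derivative $t^{1-\alpha}/\Gamma(2-\alpha)$.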
6. Empirical Performance and Comparative Analysis
MoPINNEnKF demonstrates consistent and substantial performance gains over both plain Adam-trained PINN (ADAM-PINN) and the NSGA-III-only PINN (NSGA-III-PINN). Selected results for the Burgers forward problem (mean squared error metrics across noise levels) are as follows:
| Model | 20% noise | 50% noise | 80% noise |
|---|---|---|---|
| ADAM-PINN | | | |
| NSGA-III-PINN | | | |
| MoPINNEnKF | | | |
On inverse problems (recovering parameters such as the Burgers viscosity $\nu$ and the TFMDWE parameter), MoPINNEnKF achieves errors roughly half or less those of the other benchmarks at moderate noise levels, and retains the best performance at higher noise except under extremely poor data. In the TFMDWE cases, the mean absolute errors and parameter errors in both forward and inverse settings are reduced by factors of $2$–$3$ relative to competitor methods, with the gains most pronounced at moderate and high noise levels.
7. Synthesis and Theoretical Implications
MoPINNEnKF exemplifies a principled hybridization of multi-objective evolutionary search and ensemble Bayesian data assimilation. NSGA-III's Pareto ensemble supports model diversity and robust trade-off exploration, while the EnKF's assimilation capabilities permit effective denoising and incremental correction for imperfect data and missing physics. Iterative loss refinement based on the filtered data yields PINN models that not only generalize better under noise but also cope with incorrect prior physical assumptions and underdetermined inverse settings. A plausible implication is that this design suggests pathways for other uncertainty-quantification and multimodal data-assimilation methods in PINN-based PDE inference (Lu et al., 31 May 2025).