Geometry-Parameterized Dual-Encoder PINN
- The paper introduces GP-DE-PINN, a novel framework that separates geometric and physical encodings to accurately reconstruct unsteady flow fields and infer pressure without direct supervision.
- It employs dual-encoder networks that independently process shape parameters and spatiotemporal coordinates before fusing them to enforce incompressible Navier–Stokes physics.
- Quantitative results show up to 60% error reduction on unseen geometries, demonstrating robust generalization and efficiency in handling PDE-constrained problems.
The Geometry-Parameterized Dual-Encoder Physics-Informed Neural Network (GP-DE-PINN) is a neural-operator framework designed for rapid, mesh-free prediction of physical fields in domains with parameterized geometric complexity. By explicitly separating the geometric parameterization from the local physical encoding, GP-DE-PINN provides an expressive surrogate model for unsteady flow (and related PDE-constrained problems) on families of shapes, enabling accurate field reconstruction, pressure inference, and robust generalization. The architecture builds on recent advances in dual-encoder PINNs, geometry-aware neural networks, and neural-operator training on transformed domains (Wang et al., 10 Jan 2026, Burbulla, 2023, Nguyen et al., 2024).
1. Architectural Foundation
GP-DE-PINN implements a dual-encoder scheme in which geometric parameters and spatiotemporal coordinates are independently mapped to high-dimensional latent codes and subsequently fused before decoding into field predictions. Formally, the geometric encoder $E_g$ receives the shape-parameter vector $\mathbf{g}$ (e.g., sampled boundary radii) and outputs a latent code $\mathbf{z}_g$. The spatiotemporal encoder $E_x$ processes the physical coordinate $(x, y, t)$, generating $\mathbf{z}_x$. Concatenating these vectors yields $\mathbf{z} = [\mathbf{z}_g; \mathbf{z}_x]$, which the manifold decoder $D$ maps to the predicted fields $(u, v, p)$.
The architectural breakdown:
| Encoder | Input Type | Output |
|---|---|---|
| Geometry ($E_g$) | Shape parameters $\mathbf{g}$ | Latent $\mathbf{z}_g$ |
| Spatiotemporal ($E_x$) | Coordinates $(x, y, t)$ | Latent $\mathbf{z}_x$ |
| Decoder ($D$) | Fused code $\mathbf{z} = [\mathbf{z}_g; \mathbf{z}_x]$ | Field output $(u, v, p)$ |
This architecture enables disentanglement of geometric and physical latent spaces, avoiding the direct concatenation pitfalls of earlier geometry-aware PINNs (Wang et al., 10 Jan 2026, Nguyen et al., 2024).
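The following PyTorch sketch illustrates this layout under stated assumptions: layer counts follow the protocol of Section 4, while the `Tanh` activation, the latent widths, and all identifiers are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Fully connected stack; Tanh between layers, linear final layer."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())  # assumed smooth activation
    return nn.Sequential(*layers)

class DualEncoderPINN(nn.Module):
    """Dual-encoder layout: geometry and spatiotemporal inputs are
    encoded separately, fused by concatenation, then decoded."""

    def __init__(self, n_geom):
        super().__init__()
        # Geometry encoder E_g: shape-parameter vector g -> latent z_g
        self.E_g = mlp([n_geom, 250, 250, 250, 250])
        # Spatiotemporal encoder E_x: (x, y, t) -> latent z_x
        self.E_x = mlp([3, 50, 50, 50])
        # Decoder D: fused code [z_g; z_x] -> fields (u, v, p)
        self.D = mlp([250 + 50, 100, 100, 100, 100, 100, 3])

    def forward(self, g, xyt):
        z = torch.cat([self.E_g(g), self.E_x(xyt)], dim=-1)
        return self.D(z)  # columns: u, v, p
```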
2. Governing Equations and Physics-Informed Loss
GP-DE-PINN enforces the dimensionless incompressible Navier–Stokes equations:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p + \frac{1}{Re} \nabla^2 \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0.$$
Boundary conditions, including no-slip on obstacles, uniform inlet, and zero-traction outflow, are imposed. Loss function components consist of:
- $\mathcal{L}_{\mathrm{PDE}}$: aggregate PDE residual over collocation points,
- $\mathcal{L}_{\mathrm{BC}}$: boundary-condition penalty across sampled locations,
- $\mathcal{L}_{\mathrm{data}}$: data mismatch over empirical or benchmark samples.

Total physics-informed loss:

$$\mathcal{L} = \lambda_{\mathrm{PDE}} \mathcal{L}_{\mathrm{PDE}} + \lambda_{\mathrm{BC}} \mathcal{L}_{\mathrm{BC}} + \lambda_{\mathrm{data}} \mathcal{L}_{\mathrm{data}},$$

with equal weights ($\lambda_{\mathrm{PDE}} = \lambda_{\mathrm{BC}} = \lambda_{\mathrm{data}} = 1$).
By strictly enforcing these constraints, GP-DE-PINN is capable of inferring pressure fields without explicit pressure supervision (Wang et al., 10 Jan 2026).
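The loss assembly can be sketched as follows (a minimal PyTorch illustration, not the authors' code): the coordinate ordering $(x, y, t)$, the Reynolds number, and the tuple-based batch packaging are assumptions, and `DualEncoderPINN` refers to the Section 1 sketch. Note that the data term penalizes velocity only, so pressure is constrained purely through the momentum residuals.

```python
import torch

def ns_residuals(model, g, xyt, Re=100.0):
    """Residuals of the dimensionless incompressible Navier-Stokes
    equations at collocation points xyt = (x, y, t); Re is illustrative.
    g: shape-parameter vector repeated per point, shape (N, n_geom)."""
    xyt = xyt.detach().clone().requires_grad_(True)
    u, v, p = model(g, xyt).unbind(dim=-1)

    def grads(f):  # d(f)/d(x, y, t), graph kept for 2nd derivatives
        return torch.autograd.grad(f.sum(), xyt, create_graph=True)[0]

    u_x, u_y, u_t = grads(u).unbind(dim=-1)
    v_x, v_y, v_t = grads(v).unbind(dim=-1)
    gp = grads(p)
    p_x, p_y = gp[:, 0], gp[:, 1]
    u_xx, u_yy = grads(u_x)[:, 0], grads(u_y)[:, 1]
    v_xx, v_yy = grads(v_x)[:, 0], grads(v_y)[:, 1]

    r_u = u_t + u * u_x + v * u_y + p_x - (u_xx + u_yy) / Re  # x-momentum
    r_v = v_t + u * v_x + v * v_y + p_y - (v_xx + v_yy) / Re  # y-momentum
    r_c = u_x + v_y                                           # continuity
    return r_u, r_v, r_c

def pinn_loss(model, col, bc, obs):
    """col = (g, xyt); bc/obs = (g, xyt, uv_target). Each g is the shape
    vector repeated to match its point batch. Outflow traction terms
    are omitted for brevity; only no-slip/inlet velocity BCs are shown."""
    g_c, xyt_c = col
    r_u, r_v, r_c = ns_residuals(model, g_c, xyt_c)
    L_pde = (r_u**2 + r_v**2 + r_c**2).mean()
    g_b, xyt_b, uv_b = bc
    L_bc = ((model(g_b, xyt_b)[:, :2] - uv_b)**2).mean()
    g_o, xyt_o, uv_o = obs
    L_data = ((model(g_o, xyt_o)[:, :2] - uv_o)**2).mean()  # velocity only
    return L_pde + L_bc + L_data  # equal unit weights
```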
3. Geometry Parameterization
Geometries are encoded as low-dimensional parameter vectors; for instance, petal-shaped cylinder boundaries are defined by an inner radius $r_{\min}$, a petal count $n_p$, and an outer radius $r_{\max}$. The boundary is constructed by B-spline interpolation and rotational duplication, yielding the sampling vector

$$\mathbf{g} = \big( r(\theta_1), r(\theta_2), \ldots, r(\theta_N) \big), \qquad \theta_k = k\,\Delta\theta,$$

where $\Delta\theta$ is the azimuthal sampling interval. This parametric representation generalizes to point-cloud, binary-image, or principal-component encodings, as in GADEM and other geometry-aware neural-operator frameworks (Nguyen et al., 2024). The encoder architecture can ingest raw parameters, principal-component projections, or variationally encoded boundary representations.
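A hypothetical reconstruction of this parameterization is sketched below; the control-point layout and the use of SciPy's periodic cubic spline as a stand-in for the paper's B-spline are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def petal_boundary(r_min, r_max, n_petals, n_samples=64):
    """Sample the radius of a petal-shaped boundary at a uniform
    azimuthal interval: spline-interpolate one petal profile, then
    duplicate it rotationally n_petals times."""
    # One petal's radial profile: trough -> crest -> trough (periodic).
    s = np.array([0.0, 0.5, 1.0])
    petal = CubicSpline(s, [r_min, r_max, r_min], bc_type='periodic')

    # Uniform azimuthal sampling theta_k = k * dtheta over the boundary.
    theta = np.arange(n_samples) * (2.0 * np.pi / n_samples)
    phase = (theta * n_petals / (2.0 * np.pi)) % 1.0  # position in a petal
    return theta, petal(phase)  # g = (r(theta_1), ..., r(theta_N))

theta, g = petal_boundary(r_min=0.5, r_max=1.0, n_petals=6)
x, y = g * np.cos(theta), g * np.sin(theta)  # boundary point cloud
```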
4. Training and Implementation Protocol
GP-DE-PINN employs fully connected feed-forward networks, with a smooth activation on every hidden layer, for all encoders and the decoder:
- $E_g$: 4 layers, 250 neurons each,
- $E_x$: 3 layers, 50 neurons each,
- $D$: 5 layers, 100 neurons each, with a linear output layer,
- Weights initialized by Xavier normal, biases set to zero.
Sampling protocol:
- 2,000 collocation points per geometry across 40 training shapes, totaling 80,000,
- Boundary sets: 5,000 points each on the cylinder surface, the inlet, and the initial-time slice,
- 80,000 velocity samples for the empirical data loss.
Optimization utilizes full-batch L-BFGS over 50,000 iterations, with fixed loss weights. This regimen facilitates convergence of both geometric and physical latents, robustly enforcing physics constraints (Wang et al., 10 Jan 2026, Burbulla, 2023).
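Under these settings, the training loop reduces to a single full-batch L-BFGS call, as in the following sketch; `col_batch`, `bc_batch`, and `obs_batch` are assumed to be preassembled point sets from the sampling protocol above, and `DualEncoderPINN`/`pinn_loss` refer to the earlier sketches.

```python
import torch

def xavier_init(m):
    """Xavier-normal weights, zero biases, per the protocol above."""
    if isinstance(m, torch.nn.Linear):
        torch.nn.init.xavier_normal_(m.weight)
        torch.nn.init.zeros_(m.bias)

model = DualEncoderPINN(n_geom=64)  # Section 1 sketch; width is assumed
model.apply(xavier_init)

# Full-batch L-BFGS; max_iter caps the total number of iterations.
opt = torch.optim.LBFGS(model.parameters(), max_iter=50_000,
                        history_size=50, line_search_fn='strong_wolfe')

def closure():
    opt.zero_grad()
    loss = pinn_loss(model, col_batch, bc_batch, obs_batch)  # fixed weights
    loss.backward()
    return loss

opt.step(closure)  # L-BFGS re-invokes `closure` internally until max_iter
```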
5. Quantitative Evaluation, Generalization, and Sensitivity
Empirical assessment demonstrates GP-DE-PINN's superiority relative to direct-concatenation geometry-aware PINNs:
- On training cases, GP-DE-PINN attains velocity RMSEs of at most $0.009$ for $u$ and $0.008$ for $v$ (vs. $0.017$ for GP-PINN), with a correspondingly lower mean relative error (MRE).
- On unseen geometries (held-out radii and petal counts up to $8$), both the $u$- and $v$-MREs improve substantially over GP-PINN.
- GP-DE-PINN sharply reconstructs velocity fields and Kármán vortex streets, retains pressure gradient accuracy, and matches high-variance physical lobes in standard deviation maps.
- Pressure field is correctly inferred, despite the absence of direct pressure data in training.
Sensitivity analyses show:
| Hyperparameter | Range | Impact on MRE | Robustness |
|---|---|---|---|
| Geometric sampling density | sparse to dense boundary sampling | Stable over a broad plateau; sharp rise at the sparsest sampling | Plateau, then degrades |
| Encoder width | $200$, $250$, $500$ | Lowest MRE at width $250$ | U-shaped error trend |
Generalization analysis confirms consistent error reduction over GP-PINN for both $u$ and $v$ on excluded test geometries (Wang et al., 10 Jan 2026, Nguyen et al., 2024).
6. Connections to Dual-Encoder PINNs and Geometry-Aware Frameworks
GP-DE-PINN is consistent with recent neural-operator and geometry-transformation PINN approaches. In (Burbulla, 2023), the authors introduce a diffeomorphism that transfers geometric complexity into a fixed reference domain, enabling standard PINN training while allowing geometric parameters to vary and be optimized. This construction naturally extends to a dual-encoder architecture in which a geometry encoder extracts latent codes (via MLP, PCA, or VAE) and a physics encoder processes local coordinates.
Similarly, in (Nguyen et al., 2024), geometry encoders map boundary point-clouds, parameters, or images to latent vectors, which are injected into energy-minimizing neural architectures. These frameworks demonstrate that parametric and latent geometry injection provide systematic generalization to unseen shapes, supporting rapid evaluation and physics-consistent predictions across a design space.
A plausible implication is that GP-DE-PINN architectures, equipped with flexible geometry encoders and physics-constrained loss functions, may be extended to a range of PDE-governed phenomena where shape variability drives solution diversity.
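To make the geometry-transformation idea concrete, the minimal sketch below maps reference unit-disk coordinates onto a star-shaped physical domain by stretching rays with a boundary-radius function; it is a stand-in for the learned or parameterized diffeomorphisms of (Burbulla, 2023), and `r_of_theta` (e.g., a wrapper around the Section 3 spline) is an assumed callable.

```python
import numpy as np

def to_physical(xi, eta, r_of_theta):
    """Map reference unit-disk points (xi, eta) into a star-shaped
    physical domain: each ray is stretched so the reference boundary
    rho = 1 lands on the parameterized boundary r(theta)."""
    rho = np.hypot(xi, eta)        # reference radius in [0, 1]
    theta = np.arctan2(eta, xi)    # azimuth is shared by both domains
    r = rho * r_of_theta(theta)    # geometric complexity lives in r(theta)
    return r * np.cos(theta), r * np.sin(theta)
```

Collocation points can then be drawn once in the fixed reference domain and pushed through such a map, so a single PINN backbone serves an entire family of shapes.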
7. Significance, Limitations, and Outlook
GP-DE-PINN establishes a mesh-free, modular paradigm for high-fidelity flow-field reconstruction, pressure estimation, and neural-operator generalization over families of parameterized obstacles. The dual-encoder structure is robust to moderate changes in geometric sampling density and encoder width. Direct applications span unsteady fluid mechanics, solid mechanics (via a weak-form extension), and multi-field physics on varying domains.
Limitations noted in the data include:
- Sensitivity to extremely sparse geometric sampling,
- Slight accuracy degradation for over/under-parameterized geometry encoders.
The method is poised for integration with shape optimization, operator learning, and non-local geometric effects, as previously sketched in diffeomorphism PINNs and geometry-aware deep energy methods (Burbulla, 2023, Nguyen et al., 2024). The separation of geometric and physical encoding, together with strict physics constraints, marks GP-DE-PINN as a robust computational surrogate for parametric engineering and scientific modeling.