Interface Info-Aware Neural Operator (IANO)
- The paper introduces IANO, a framework that integrates interface data as a physical prior into neural operators for accurately resolving multiphase PDEs.
- It employs two key modules—interface-aware encoding and geometry-aware positional encoding—to capture high-frequency variations and sharp discontinuities.
- Numerical results show up to 33% RMSE reduction and enhanced noise resilience, demonstrating practical improvements in multiphase flow simulations.
The Interface Information-Aware Neural Operator (IANO) is a neural operator framework designed to address key computational challenges in modeling interface-driven partial differential equations (PDEs), particularly for multiphase flow systems. Multiphase flows are typified by complex dynamics, sharp field discontinuities at phase interfaces, and strong interphase coupling phenomena, which conventional neural operators and numerical solvers often fail to resolve with high accuracy or efficiency. IANO leverages explicit interface information as a physical prior, enabling robust and high-resolution operator learning even in regimes characterized by strong spatial heterogeneity and limited data.
1. Mathematical and Physical Context
In multiphase and interface-driven problems, the governing equations typically involve coupled PDEs (momentum, energy, continuity) on a spatial domain $\Omega$ partitioned by one or more interfaces $\Gamma$. Interfaces are loci of discontinuities and high-frequency variations in physical fields such as density, temperature, or velocity, with associated source or jump conditions modeling mass and heat transfer. Accurately capturing the effects at these interfaces requires models that can represent sharp spatial gradients and discontinuities, which pose challenges for both mesh-based solvers and traditional end-to-end neural operator methods.
Neural operators, including DeepONet and the Fourier Neural Operator (FNO), learn mappings of the form
$$\mathcal{G}_\theta : \big(\{u_i\}_{i=1}^{N}, \mu\big) \mapsto \{u_i(\cdot,\, t + \Delta t)\}_{i=1}^{N},$$
where the $u_i$ are physical fields and $\mu$ denotes system parameters. However, these architectures exhibit spectral bias, making them inefficient at recovering high-frequency variations, especially in data-sparse regimes and near interfaces.
IANO addresses these limitations by integrating interface geometry and information directly into the learning process, thus enhancing spectral expressiveness at the interfaces and robustness with respect to measurement noise and data scarcity.
2. Architectural Components of IANO
IANO incorporates two principal modules that interact to enable interface-aware operator learning:
2.1 Interface-Aware Multiple Function Encoding
This module encodes the relations among physical fields and the interface itself. Inputs include:
- Physical fields $u_i$ for $i = 1, \dots, N$
- An interface indicator $\chi_\Gamma$ or a level-set embedding $\phi$
- System parameters $\mu$
Initial embeddings are produced for each input function by backbone networks and normalized via layer normalization.
A cross-attention mechanism then fuses these embeddings using scaled dot-product attention,
$$\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,$$
producing a fused embedding. This structure ensures that both inter-field coupling and interface coupling, especially the high-frequency behavior at $\Gamma$, are directly encoded into the latent space.
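A minimal PyTorch sketch of this fusion step is given below. The pointwise linear backbones, the mean-pooling over field embeddings, and all module names are illustrative assumptions, not the paper's implementation; only the overall pattern (per-function encoding, normalization, then cross-attention against the interface embedding) follows the description above.

```python
import torch
import torch.nn as nn

class InterfaceAwareEncoder(nn.Module):
    """Sketch of interface-aware multiple function encoding (names illustrative)."""
    def __init__(self, n_fields: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        # One pointwise backbone per input function (physical fields + interface map).
        self.backbones = nn.ModuleList(
            [nn.Linear(1, d_model) for _ in range(n_fields + 1)]
        )
        self.norm = nn.LayerNorm(d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, fields: torch.Tensor, interface: torch.Tensor) -> torch.Tensor:
        # fields: (B, n_fields, P); interface: (B, P); P = number of grid points.
        inputs = list(fields.unbind(dim=1)) + [interface]
        embs = [self.norm(bb(u.unsqueeze(-1))) for bb, u in zip(self.backbones, inputs)]
        z_fields = torch.stack(embs[:-1], dim=1).mean(dim=1)   # pooled field embeddings
        z_iface = embs[-1]                                     # interface embedding
        # Field embeddings (queries) attend to the interface embedding (keys/values).
        fused, _ = self.cross_attn(z_fields, z_iface, z_iface)
        return fused                                           # (B, P, d_model)

torch.manual_seed(0)
enc = InterfaceAwareEncoder(n_fields=2, d_model=32, n_heads=4)
fused = enc(torch.randn(3, 2, 10), torch.randn(3, 10))
```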
2.2 Geometry-Aware Positional Encoding
This module establishes a pointwise relationship among the spatial position $x$, the interface geometry $\Gamma$, and the latent embeddings. A cross-attention layer aligns these geometric encodings with the previously computed latent embeddings, and a subsequent self-attention layer refines the result, yielding the geometry-aware positional embedding.
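The scaled dot-product cross-attention underlying this alignment can be written in a few lines of NumPy. Shapes and variable names here are illustrative assumptions: per-point geometric encodings serve as queries, latent embeddings as keys and values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    """Q: (P, d) geometric encodings; K, V: (M, d) latent embeddings."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # (P, M) attention weights
    return weights @ V                          # (P, d) aligned embeddings

rng = np.random.default_rng(0)
P_geo = rng.standard_normal((64, 16))   # pointwise positional/geometry encodings
Z = rng.standard_normal((32, 16))       # latent embeddings from the field encoder
aligned = cross_attention(P_geo, Z, Z)
```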
The outputs of the two modules, the fused field embedding and the geometry-aware positional embedding, are concatenated and decoded (typically by an FNO stack) to generate the step-forward prediction.
3. Training Protocol and Optimization
IANO is trained to minimize the mean squared error (MSE) across all fields and spatial points,
$$\mathcal{L} = \frac{1}{N M} \sum_{i=1}^{N} \sum_{j=1}^{M} \big( \hat{u}_i(x_j) - u_i(x_j) \big)^2.$$
No explicit interface penalty is required, as interface fidelity is built into the encoders.
Typical hyperparameters include a fixed latent dimension, 8 attention heads, a 4-layer cross-attention depth, and GELU activations; optimization uses the Adam optimizer. Fields and interface maps are preprocessed by min–max normalization and resampled to a common computational grid.
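The min–max preprocessing mentioned above can be sketched as follows; the per-field statistics are kept so predictions can be mapped back to physical units, and the function names are illustrative.

```python
import numpy as np

def minmax_normalize(field, eps=1e-12):
    """Scale a field to [0, 1]; return the stats needed to invert the map."""
    lo, hi = float(field.min()), float(field.max())
    return (field - lo) / (hi - lo + eps), (lo, hi)

def minmax_denormalize(normed, stats):
    """Map a normalized field back to physical units."""
    lo, hi = stats
    return normed * (hi - lo) + lo

temperature = np.array([[300.0, 350.0], [400.0, 450.0]])  # toy field (kelvin)
normed, stats = minmax_normalize(temperature)
```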
Data for training is synthesized from high-fidelity numerical solvers (e.g., Flash-X level-set solver) for multiple multiphase boiling scenarios, with interface labels derived from thresholding level-set embeddings.
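Deriving a binary interface label by thresholding a level-set field can be illustrated as below; the circular level set (a single bubble) and the tolerance value are toy choices, not the paper's settings.

```python
import numpy as np

def interface_mask(phi, tol):
    """Mark grid cells whose signed distance |phi| lies within tol of the interface."""
    return (np.abs(phi) < tol).astype(np.uint8)

# Toy level set: signed distance to a radius-0.5 circle centered at the origin.
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5
mask = interface_mask(phi, tol=0.05)
```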
4. Numerical and Empirical Results
Quantitative evaluations compare IANO to established operator architectures such as U-Net, MIONet, GNOT, and CODA-NO on five multiphase scenarios, using both overall RMSE over the full domain and interface-restricted RMSE (IRMSE) computed near $\Gamma$.
Key results include:
- For subcooled pool boiling (temperature field): U-Net RMSE = 0.035, IRMSE = 0.129; IANO RMSE = 0.030 (14.3% lower), IRMSE = 0.118 (8.5% lower).
- For single bubble (temperature field): GNOT RMSE = 0.009, IRMSE = 0.031; IANO RMSE = 0.006 (33% lower), IRMSE = 0.021 (32% lower).
- Across all five benchmark scenarios and both velocity channels, IANO achieves on average ∼10% lower RMSE, with the greatest improvements at the interfaces.
Super-resolution capability is demonstrated by training on low-resolution data and testing at higher upsampling factors; for temperature, upscaling reduces RMSE from 0.060 (U-Net) to 0.031. Velocity channels show similar RMSE reductions of approximately 40%.
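Because the positional encoding is pointwise, a trained model can be queried on a finer grid than it was trained on without retraining. The mechanism can be illustrated with a stand-in pointwise function; the toy model below is not IANO, only a placeholder for any coordinate-to-value map.

```python
import numpy as np

def query_on_grid(model, n):
    """Evaluate a pointwise model on an n-by-n uniform grid over [0, 1]^2."""
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    return model(X, Y)

# Stand-in for a trained pointwise operator model.
toy_model = lambda X, Y: np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)

coarse = query_on_grid(toy_model, 32)    # training resolution
fine = query_on_grid(toy_model, 128)     # 4x upsampled queries, no retraining
```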
Robustness to input noise is demonstrated by adding 1%, 3%, or 5% Gaussian noise to both fields and interfaces. IANO's RMSE increases only modestly (from 0.364 to 0.383 at 5% noise), while baseline errors degrade by 15–25%.
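The noise protocol can be sketched as adding zero-mean Gaussian perturbations whose standard deviation is a percentage of each field's value range; the relative-scale convention here is an assumption about how the percentages are defined.

```python
import numpy as np

def add_relative_noise(field, level, rng):
    """Add Gaussian noise with std = level * (field range); level=0.05 means 5%."""
    scale = level * (float(field.max()) - float(field.min()))
    return field + rng.normal(0.0, scale, size=field.shape)

rng = np.random.default_rng(42)
clean = np.sin(np.linspace(0.0, np.pi, 256)).reshape(16, 16)
noisy = add_relative_noise(clean, 0.05, rng)   # 5% noise level
```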
Ablation studies show removing interface encoding or geometry-aware modules increases RMSE by 10–30% and IRMSE even more, demonstrating the necessity of both components for IANO's performance.
5. Comparison with Other Interface-Operator Frameworks
An alternative approach to interface-aware operator learning is the Interfaced Operator Network (IONet) (Wu et al., 2023), which partitions the spatial domain into subdomains and trains branch/trunk subnetworks specific to each subdomain. Branch nets encode inputs in each region, while trunk nets yield spatially dependent features, and interface discontinuities are preserved by construction through summation with region-specific indicator functions.
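IONet's by-construction discontinuity can be illustrated with stand-in subnetworks combined through region-specific indicator functions; the lambdas below replace trained branch/trunk networks and the 1-D split at x = 0 is a toy domain decomposition.

```python
import numpy as np

def ionet_output(x, subnet_outputs, indicators):
    """u(x) = sum_k 1_{Omega_k}(x) * u_k(x), with indicators partitioning the domain."""
    return sum(ind(x) * u(x) for u, ind in zip(subnet_outputs, indicators))

# Two subdomains separated by an interface at x = 0; the combined field
# jumps from 1 to 2 across it, with no smoothing between subdomains.
u1 = lambda x: np.ones_like(x)          # stand-in subnetwork on Omega_1
u2 = lambda x: 2.0 * np.ones_like(x)    # stand-in subnetwork on Omega_2
ind1 = lambda x: (x < 0).astype(float)
ind2 = lambda x: (x >= 0).astype(float)

xs = np.linspace(-1.0, 1.0, 11)
u = ionet_output(xs, [u1, u2], [ind1, ind2])
```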
IANO differs notably from IONet in its explicit integration of interface data via attention-based modules and geometry-aware encoding, rather than domain decomposition. Furthermore, while IONet employs a physics-informed loss to enforce PDE and jump conditions at collocation points—including the interface—IANO achieves interface fidelity via architectural priors without requiring such terms in the loss function. Empirical comparisons (although not within the same paper) suggest that both frameworks outperform vanilla operator networks in resolving interface phenomena, but IANO's super-resolution and noise-robustness properties are direct consequences of its encoding strategies.
6. Extensions, Limitations, and Future Directions
IANO’s explicit integration of interface information allows robust operator learning in challenging regimes—limited data, high noise, and severe spectral complexity. Geometry-aware positional encoding confers the ability to generate pointwise super-resolution predictions without retraining, supporting predictions on arbitrary mesh densities.
Current limitations include the reliance on accurate interface labels, which are not always available a priori. Extensions could involve hybridization with level-set neural fields for simultaneously inferring interface geometry, or enforcing physics-informed constraints at the interface for situations lacking direct interface data. The architecture is straightforward to generalize to other interface-driven PDEs, such as fluid–structure interaction or moving fronts, by adapting the geometry-extraction module.
Incorporating further physical loss terms—such as divergence-free or jump-condition penalties—could enhance physical fidelity in extrapolative or data-sparse regimes.
A plausible implication is that the IANO motif (explicit geometric priors in operator learning) will influence future neural operator architectures targeting systems with sharp spatial structure or where auxiliary geometric data is experimentally accessible.
7. Significance and Outlook
By leveraging interface geometry encoded in both field-function and positional embeddings, IANO offers a robust framework for learning neural operators capable of resolving sharp features and discontinuities in interface-dominated PDEs. Its demonstrated performance gains—average RMSE reduction of approximately 10%, pronounced improvements at interfaces, stable super-resolution, and resilience to substantial synthetic noise—position IANO as a front-running approach for real-world multiphase flow simulations and other problems demanding discontinuity-aware operator learning.