
Interface Information-Aware Neural Operator (IANO)

Updated 16 November 2025
  • The paper introduces IANO, a framework that integrates interface data as a physical prior into neural operators for accurately resolving multiphase PDEs.
  • It employs two key modules—interface-aware encoding and geometry-aware positional encoding—to capture high-frequency variations and sharp discontinuities.
  • Numerical results show up to 33% RMSE reduction and enhanced noise resilience, demonstrating practical improvements in multiphase flow simulations.

The Interface Information-Aware Neural Operator (IANO) is a neural operator framework designed to address key computational challenges in modeling interface-driven partial differential equations (PDEs), particularly for multiphase flow systems. Multiphase flows are typified by complex dynamics, sharp field discontinuities at phase interfaces, and strong interphase coupling phenomena, which conventional neural operators and numerical solvers often fail to resolve with high accuracy or efficiency. IANO leverages explicit interface information as a physical prior, enabling robust and high-resolution operator learning even in regimes characterized by strong spatial heterogeneity and limited data.

1. Mathematical and Physical Context

In multiphase and interface-driven problems, the governing equations typically involve coupled PDEs (momentum, energy, continuity) on a spatial domain $\Omega$ partitioned by one or more interfaces $\Gamma$. Interfaces are loci of discontinuities and high-frequency variations in physical fields—such as density, temperature, or velocity—with associated source or jump conditions modeling mass and heat transfer. Accurately capturing the effects at these interfaces requires models capable of representing sharp spatial gradients and discontinuities, which pose challenges for both mesh-based solvers and traditional end-to-end neural operator methods.

Neural operators, including DeepONet and Fourier Neural Operator (FNO), learn mappings of the form

$$\mathcal{G}: \{h_1(\cdot), \dots, h_M(\cdot), \theta\} \longmapsto u(\cdot),$$

where $h_i$ are physical fields and $\theta$ are system parameters. However, these architectures exhibit spectral bias, making them inefficient at recovering high-frequency variations, especially in data-sparse regimes and near interfaces.
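To make the mapping concrete, the sketch below (PyTorch; all module names and shapes are illustrative, not from the paper) shows an operator-style module that consumes $M$ discretized fields plus parameters $\theta$ and returns a predicted field. A pointwise MLP stands in for the FNO/DeepONet internals:

```python
import torch
import torch.nn as nn

class NeuralOperatorSketch(nn.Module):
    """Illustrative stand-in for G: (h_1..h_M, theta) -> u."""
    def __init__(self, num_fields: int, param_dim: int, width: int = 64):
        super().__init__()
        # Pointwise MLP as a placeholder for FNO/DeepONet layers.
        self.net = nn.Sequential(
            nn.Linear(num_fields + param_dim, width),
            nn.GELU(),
            nn.Linear(width, 1),
        )

    def forward(self, fields: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        # fields: (batch, num_points, M) sampled values of h_1..h_M
        # theta:  (batch, p) system parameters, broadcast to every grid point
        theta_b = theta.unsqueeze(1).expand(-1, fields.shape[1], -1)
        return self.net(torch.cat([fields, theta_b], dim=-1))  # (batch, num_points, 1)
```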

IANO addresses these limitations by integrating interface geometry and information directly into the learning process, thus enhancing spectral expressiveness at the interfaces and robustness with respect to measurement noise and data scarcity.

2. Architectural Components of IANO

IANO incorporates two principal modules that interact to enable interface-aware operator learning:

2.1 Interface-Aware Multiple Function Encoding

This module encodes the relations among physical fields and the interface itself. Inputs include:

  • Physical fields $h_i(x, t)$ for $i = 1, \dots, M$
  • An interface indicator $I(x, t) \in \{0, 1\}$ or a level-set embedding $\gamma(x, t)$
  • System parameters $\theta \in \mathbb{R}^p$

Initial embeddings are produced by backbone networks,

$$\mathbf{H}_i = f_{\mathrm{FNO}}(h_i) \in \mathbb{R}^D, \quad \mathbf{H}_I = f_{\mathrm{U\text{-}Net}}(I) \in \mathbb{R}^D, \quad \mathbf{H}_P = f_{\mathrm{MLP}}(\theta) \in \mathbb{R}^D,$$

and then $L^2$-normalized.
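A minimal sketch of this embedding stage (PyTorch), assuming `f_fno`, `f_unet`, and `f_mlp` are backbone modules that each return $D$-dimensional feature vectors:

```python
import torch.nn.functional as F

def embed_inputs(fields, interface, theta, f_fno, f_unet, f_mlp):
    # fields: list of M field tensors h_i; interface: indicator I; theta: parameters.
    # Each backbone maps its input to R^D; embeddings are then L2-normalized.
    H = [F.normalize(f_fno(h), p=2, dim=-1) for h in fields]   # tilde-H_i
    H_I = F.normalize(f_unet(interface), p=2, dim=-1)          # tilde-H_I
    H_P = F.normalize(f_mlp(theta), p=2, dim=-1)               # tilde-H_P
    return H, H_I, H_P
```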

A cross-attention mechanism then fuses these embeddings:

$$(q, k, v) = (W^q \tilde{H},\, W^k \tilde{H},\, W^v \tilde{H}) \in \mathbb{R}^d,$$

with scaled dot-product attention weights

$$\alpha_{i \to j} = \frac{\exp\left( \langle q_i, k_j \rangle / (\|q_i\| \|k_j\|) \right)}{\sum_{r=1}^M \exp\left( \langle q_i, k_r \rangle / (\|q_i\| \|k_r\|) \right) + \exp\left( \langle q_i, k_I \rangle / (\|q_i\| \|k_I\|) \right)},$$

producing the fused embedding

$$\mathbf{H}_i' = \sum_{j=1}^M \alpha_{i \to j} \frac{v_j}{\|v_j\|} + \alpha_{i \to I} \frac{v_I}{\|v_I\|} + \tilde{\mathbf{H}}_i.$$

This structure ensures that both inter-field and interface coupling, especially the high-frequency behavior at $\Gamma$, are directly encoded into the latent space.
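The fusion step can be sketched as follows (assuming the attention dimension $d$ equals the embedding dimension $D$ so the residual connection type-checks, and that queries are formed from the field embeddings only, as the denominator of the weights suggests):

```python
import torch

def interface_aware_fusion(H_tilde, H_I_tilde, Wq, Wk, Wv):
    # H_tilde: (M, D) normalized field embeddings; H_I_tilde: (D,) interface embedding.
    # Wq/Wk/Wv: (D, d) projection matrices, with d == D assumed here.
    E = torch.cat([H_tilde, H_I_tilde.unsqueeze(0)], dim=0)        # (M+1, D)
    q, k, v = H_tilde @ Wq, E @ Wk, E @ Wv
    # Cosine-similarity logits <q_i, k_j> / (||q_i|| ||k_j||), per the formula above.
    logits = (q @ k.T) / (q.norm(dim=-1, keepdim=True) * k.norm(dim=-1)[None, :])
    alpha = torch.softmax(logits, dim=-1)    # rows sum to 1 over the M fields + interface
    v_unit = v / v.norm(dim=-1, keepdim=True)                      # v_j / ||v_j||
    return alpha @ v_unit + H_tilde                                # fused H'_i with residual
```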

2.2 Geometry-Aware Positional Encoding

This module establishes a pointwise relationship among the spatial position $x$, the interface geometry $\gamma(x)$, and the latent embeddings:

$$q_x = [\sin(2\pi x/L),\ \cos(2\pi x/L),\ \gamma(x)]\, W_x \in \mathbb{R}^d.$$

A cross-attention layer aligns these geometric queries with the previously computed latent embeddings,

$$\mathbf{Z} = \sum_{i \in \{1:M,\, I,\, P\}} \alpha_i v_i, \qquad \alpha_i = \frac{\exp(q_x \cdot k_i)}{\sum_j \exp(q_x \cdot k_j)}.$$

Subsequent self-attention refines $\mathbf{Z}$, yielding $\mathbf{Z}'$ as the geometry-aware positional embedding.
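In code, the geometric query and its alignment with the latents might look like this (one spatial dimension for brevity; `K` and `V` collect keys and values from the field, interface, and parameter embeddings):

```python
import math
import torch

def geometry_positional_query(x, gamma_x, L, Wx):
    # x, gamma_x: (N,) point coordinates and level-set values; Wx: (3, d).
    feats = torch.stack([torch.sin(2 * math.pi * x / L),
                         torch.cos(2 * math.pi * x / L),
                         gamma_x], dim=-1)                 # (N, 3)
    return feats @ Wx                                      # q_x: (N, d)

def align_with_latents(q_x, K, V):
    # K, V: (M+2, d) from the M field, interface, and parameter embeddings.
    alpha = torch.softmax(q_x @ K.T, dim=-1)               # (N, M+2)
    return alpha @ V                                       # Z: (N, d); self-attention then refines Z to Z'
```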

The outputs $\{\mathbf{H}_i'\}$ and $\mathbf{Z}'$ are concatenated and decoded (typically by an FNO stack) to generate the step-forward prediction $\{\hat{h}_i(\cdot)\}$.
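The exact wiring of this read-out is not fully specified in the text; one plausible sketch is a per-field decoder applied to the concatenation of each fused embedding with the positional embedding:

```python
import torch

def decode_step(H_prime, Z_prime, decoder):
    # H_prime: (M, D) fused field embeddings; Z_prime: (N, d) positional embeddings.
    # `decoder` is any module mapping (N, D + d) -> (N, 1); the paper uses an FNO stack.
    N = Z_prime.shape[0]
    outs = []
    for i in range(H_prime.shape[0]):                      # one read-out per physical field
        h_b = H_prime[i].unsqueeze(0).expand(N, -1)        # broadcast over query points
        outs.append(decoder(torch.cat([h_b, Z_prime], dim=-1)))
    return torch.cat(outs, dim=-1)                         # (N, M) step-forward prediction
```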

3. Training Protocol and Optimization

IANO is trained to minimize the mean squared error (MSE) across all fields and spatial points:

$$\mathcal{L} = \frac{1}{M} \sum_{i=1}^M \frac{1}{|\Omega|} \int_{\Omega} \left( \hat{h}_i(x) - h_i^*(x) \right)^2 \, dx.$$

No explicit interface penalty is required, as interface fidelity is built into the encoders.

Typical hyperparameters include a latent dimension $d = 64$, 8 attention heads, a 4-layer cross-attention depth, and GELU activations. Optimization uses Adam with a learning rate of $1 \times 10^{-3}$. Fields and interface maps are preprocessed by min–max normalization and resampled to a common computational grid.
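A training-loop sketch consistent with these settings (the epoch count and the loader's `(inputs, targets)` convention are assumptions):

```python
import torch

def train(model, loader, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)    # learning rate from the text
    for _ in range(epochs):
        for inputs, targets in loader:
            pred = model(*inputs)                          # fields, interface, theta, ...
            loss = torch.mean((pred - targets) ** 2)       # plain MSE, no interface penalty
            opt.zero_grad()
            loss.backward()
            opt.step()
```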

Training data are synthesized from high-fidelity numerical solvers (e.g., the Flash-X level-set solver) for multiple multiphase boiling scenarios, with interface labels $I(x)$ derived by thresholding the level-set embeddings.
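A preprocessing sketch along these lines (the band half-width used to threshold the level set is a hypothetical choice; the text only says labels come from thresholding):

```python
import torch

def preprocess(field, level_set, band=0.02):
    # Min-max normalize the field to [0, 1].
    f = (field - field.min()) / (field.max() - field.min() + 1e-12)
    # Binary interface label: points within a thin band around the zero level set.
    indicator = (level_set.abs() < band).float()
    return f, indicator
```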

4. Numerical and Empirical Results

Quantitative evaluations compare IANO to established operator architectures such as U-Net, MIONet, GNOT, and CODA-NO on five multiphase scenarios, using both overall RMSE on $\Omega$ and interface-restricted RMSE (IRMSE) on $\Gamma$.

Key results include:

  • For subcooled pool boiling (temperature $T$): U-Net RMSE = 0.035, IRMSE = 0.129; IANO RMSE = 0.030 (14.3% lower), IRMSE = 0.118 (8.5% lower).
  • For single bubble (temperature $T$): GNOT RMSE = 0.009, IRMSE = 0.031; IANO RMSE = 0.006 (33% lower), IRMSE = 0.021 (32% lower).
  • Across all five benchmark scenarios and both velocity channels, IANO achieves on average ∼10% lower RMSE, with the greatest improvements at the interfaces.

Super-resolution capability is demonstrated by training on low-resolution data and testing at $2\times$ or $4\times$ upsampling; for temperature at $4\times$ upscaling, RMSE drops from 0.060 (U-Net) to 0.031 (IANO). Velocity channels show similar RMSE reductions of approximately 40%.
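Because the positional encoding is pointwise, upsampled predictions only require evaluating the query features on a finer grid, with no retraining. A sketch (placeholder values throughout; `Wx` stands in for the learned projection):

```python
import math
import torch

x_fine = torch.linspace(0.0, 1.0, 256)        # 4x a 64-point training grid
gamma_fine = torch.zeros_like(x_fine)         # level-set values at the new points (placeholder)
Wx = torch.randn(3, 64)                       # learned projection, random here
feats = torch.stack([torch.sin(2 * math.pi * x_fine),
                     torch.cos(2 * math.pi * x_fine),
                     gamma_fine], dim=-1)
q_fine = feats @ Wx                           # (256, 64) queries at the finer resolution
```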

Robustness to input noise is demonstrated by adding 1%, 3%, or 5% Gaussian noise to both fields and interfaces. IANO's RMSE increases only modestly (from 0.364 to 0.383 at 5% noise), while baseline errors degrade by 15–25%.
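The perturbation protocol can be reproduced with a helper like the following (scaling the noise to a fraction of the value range is an assumption about the convention):

```python
import torch

def add_relative_noise(t, level):
    # level = 0.01, 0.03, or 0.05 for the 1%/3%/5% settings above.
    scale = level * (t.max() - t.min())
    return t + scale * torch.randn_like(t)
```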

Ablation studies show removing interface encoding or geometry-aware modules increases RMSE by 10–30% and IRMSE even more, demonstrating the necessity of both components for IANO's performance.

5. Comparison with Other Interface-Operator Frameworks

An alternative approach to interface-aware operator learning is the Interfaced Operator Network (IONet) (Wu et al., 2023), which partitions the spatial domain into subdomains and trains branch/trunk subnetworks specific to each subdomain. Branch nets encode inputs in each region, while trunk nets yield spatially dependent features, and interface discontinuities are preserved by construction through summation with region-specific indicator functions.
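Schematically, IONet's by-construction discontinuity can be sketched as an indicator-gated sum of per-subdomain DeepONet read-outs (names and shapes are illustrative, not IONet's actual API):

```python
import torch

def ionet_output(x, branch_feats, trunk_nets, region_masks):
    # branch_feats[k]: (d,) branch output for subdomain k;
    # trunk_nets[k]: maps x (N, dim) -> (N, d); region_masks[k]: (N,) indicator of subdomain k.
    u = torch.zeros(x.shape[0])
    for b_k, trunk_k, mask_k in zip(branch_feats, trunk_nets, region_masks):
        u = u + mask_k * (trunk_k(x) @ b_k)   # inner product, gated by the region indicator
    return u
```

Because the masks partition the domain, the summed output can be discontinuous across $\Gamma$ even though each branch/trunk pair is smooth.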

IANO differs notably from IONet in its explicit integration of interface data via attention-based modules and geometry-aware encoding, rather than domain decomposition. Furthermore, while IONet employs a physics-informed loss to enforce PDE and jump conditions at collocation points—including the interface—IANO achieves interface fidelity via architectural priors without requiring such terms in the loss function. Empirical comparisons (although not within the same paper) suggest that both frameworks outperform vanilla operator networks in resolving interface phenomena, but IANO's super-resolution and noise-robustness properties are direct consequences of its encoding strategies.

6. Extensions, Limitations, and Future Directions

IANO’s explicit integration of interface information allows robust operator learning in challenging regimes—limited data, high noise, and severe spectral complexity. Geometry-aware positional encoding confers the ability to generate pointwise super-resolution predictions without retraining, supporting evaluation on arbitrary mesh densities.

Current limitations include the reliance on accurate interface labels, which are not always available a priori. Extensions could involve hybridizing with level-set neural fields to infer interface geometry jointly, or enforcing physics-informed constraints at $\Gamma$ when direct interface data are lacking. The architecture generalizes straightforwardly to other interface-driven PDEs, such as fluid–structure interaction or moving fronts, by adapting the geometry-extraction module.

Incorporating further physical loss terms—such as divergence-free or jump-condition penalties—could enhance physical fidelity in extrapolative or data-sparse regimes.

A plausible implication is that the IANO motif (explicit geometric priors in operator learning) will influence future neural operator architectures targeting systems with sharp spatial structure or where auxiliary geometric data is experimentally accessible.

7. Significance and Outlook

By leveraging interface geometry encoded in both field-function and positional embeddings, IANO offers a robust framework for learning neural operators capable of resolving sharp features and discontinuities in interface-dominated PDEs. Its demonstrated performance gains—average RMSE reduction of approximately 10%, pronounced improvements at interfaces, stable super-resolution, and resilience to substantial synthetic noise—position IANO as a front-running approach for real-world multiphase flow simulations and other problems demanding discontinuity-aware operator learning.
