
Parameter-Embedded Fourier Neural Operator

Updated 13 August 2025
  • PE-FNO explicitly embeds scalar, vector, or field parameters into Fourier layers, tailoring the operator to diverse physical conditions.
  • It supports rapid surrogate modeling for parametric PDEs, inverse problems, and zero-shot super-resolution across varying grid resolutions.
  • The approach achieves robust generalization with sub-percent errors in applications like battery simulation and seismic imaging while maintaining mesh invariance.

The Parameter-Embedded Fourier Neural Operator (PE-FNO) is a neural operator architecture that extends the Fourier Neural Operator's capacity to learn parametric maps between infinite-dimensional function spaces by explicitly conditioning the operator on external physical, structural, or experimental parameters. While the “vanilla” FNO achieves mesh invariance and operator generalization by parameterizing nonlocal integral kernels directly in the truncated Fourier domain, the PE-FNO augments this by ensuring that the learned solution operator incorporates exogenous parameter dependencies—such as material properties, geometry, control signals, or spatial heterogeneity. This paradigm supports rapid parametric simulations, accelerated inverse problems, and robust transfer across operating or physical regimes.

1. Architectural Principles and Parameter Embedding

The PE-FNO retains the core spectral operator design of the original FNO, where the kernel is learned in the Fourier domain and the forward pass through an FNO layer is given by

$$v_{t+1}(x) = \sigma\!\left( W v_t(x) + \mathcal{F}^{-1}\!\left[ R_\phi \cdot \mathcal{F}(v_t) \right](x) \right),$$

with $\mathcal{F}$, $\mathcal{F}^{-1}$ the forward/inverse Fourier transforms, $R_\phi$ the tensor of learned weights on the retained low-frequency modes, and $W$ a local channel-mixing operator (Li et al., 2020).
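
A minimal PyTorch sketch of this update for the 1-D case; class and variable names (SpectralConv1d, FNOLayer) are illustrative rather than taken from any reference implementation:

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT -> multiply retained low modes by learned
    complex weights R_phi -> inverse FFT."""
    def __init__(self, channels, n_modes):
        super().__init__()
        self.n_modes = n_modes  # number of retained low-frequency modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat))

    def forward(self, v):                      # v: (batch, channels, grid)
        v_hat = torch.fft.rfft(v)              # F(v_t)
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.n_modes] = torch.einsum(
            "bim,iom->bom", v_hat[..., :self.n_modes], self.weight)
        return torch.fft.irfft(out_hat, n=v.size(-1))  # F^{-1}[R_phi . F(v_t)]

class FNOLayer(nn.Module):
    """v_{t+1} = sigma(W v_t + spectral_conv(v_t))."""
    def __init__(self, channels, n_modes):
        super().__init__()
        self.spectral = SpectralConv1d(channels, n_modes)
        self.w = nn.Conv1d(channels, channels, kernel_size=1)  # local mixing W

    def forward(self, v):
        return torch.relu(self.w(v) + self.spectral(v))
```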

The key innovation of the PE-FNO is the explicit embedding of one or more parameters $\mathbf{p}$ (scalars, vectors, or spatially varying fields) into the Fourier operator. Several embedding strategies have emerged (the first two are sketched in code after this list):

  • Channel-wise Embedding: Parameters are projected (using MLPs or linear layers) into modulation vectors or latent codes, which are used for channel- or feature-wise scaling, bias, or gating in each Fourier layer (Panahi et al., 11 Aug 2025).
  • Concatenation: Parameters are concatenated to the input as additional constant or spatially repeated channels before the lifting/network body (Panahi et al., 11 Aug 2025, Guan et al., 2021).
  • Kernel Modulation: The spectral kernels $R_\phi$ themselves may be made explicit functions of $\mathbf{p}$, so each parameter instance induces a distinct integral transform on the input (Panahi et al., 11 Aug 2025, Zhao et al., 2023).
  • Embedding via Input Coordinates: For spatially varying parameters (e.g., velocity fields, permeability maps), new input channels encoding these functions are included directly, and the network learns their functional role (Li et al., 2022).
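
As a hedged illustration of the first two strategies, a PyTorch sketch with assumed names (ParamFiLM, concat_params) that are not from the cited works:

```python
import torch
import torch.nn as nn

class ParamFiLM(nn.Module):
    """Channel-wise embedding: an MLP maps p to per-channel scale and shift
    (FiLM-style gating) applied to the features of each Fourier layer."""
    def __init__(self, p_dim, channels):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(p_dim, 64), nn.GELU(),
                                 nn.Linear(64, 2 * channels))

    def forward(self, v, p):            # v: (batch, channels, grid), p: (batch, p_dim)
        scale, shift = self.mlp(p).chunk(2, dim=-1)
        return v * (1 + scale.unsqueeze(-1)) + shift.unsqueeze(-1)

def concat_params(u, p):
    """Concatenation: repeat p along the grid and append it as extra input
    channels before the lifting layer."""
    p_field = p.unsqueeze(-1).expand(-1, -1, u.size(-1))   # (batch, p_dim, grid)
    return torch.cat([u, p_field], dim=1)                  # (batch, c + p_dim, grid)
```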

A canonical PE-FNO update is

$$v_{t+1}(x) = \sigma\!\left( W(\mathbf{p})\, v_t(x) + \mathcal{F}^{-1}\!\left[ R_\phi(\mathbf{p}) \cdot \mathcal{F}(v_t) \right](x) \right),$$

where $(\mathbf{p})$ denotes explicit parameter-dependent modulation of the local and spectral weights.
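
A sketch of kernel modulation under one plausible factorization, $R_\phi(\mathbf{p}) = R_\phi \odot g(\mathbf{p})$ with $g$ an assumed per-mode MLP gate; the cited papers may parameterize $R_\phi(\mathbf{p})$ differently:

```python
import torch
import torch.nn as nn

class ParamSpectralConv1d(nn.Module):
    """Spectral kernel R_phi(p): a base complex kernel gated per retained mode
    by an MLP of the parameters (one plausible factorization, assumed here)."""
    def __init__(self, channels, n_modes, p_dim):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat))
        self.gate = nn.Sequential(nn.Linear(p_dim, 64), nn.GELU(),
                                  nn.Linear(64, n_modes))   # g(p): per-mode gain

    def forward(self, v, p):                 # v: (batch, channels, grid), p: (batch, p_dim)
        v_hat = torch.fft.rfft(v)
        g = self.gate(p).to(torch.cfloat)    # (batch, n_modes)
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.n_modes] = torch.einsum(
            "bim,iom,bm->bom", v_hat[..., :self.n_modes], self.weight, g)
        return torch.fft.irfft(out_hat, n=v.size(-1))
```

Because $\mathbf{p}$ enters only through the learned gate, the same trained weights can be evaluated under new physical conditions at inference time by passing a different p, with no retraining.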

2. Applications and Benefits

a. Surrogate Modeling for Parametric Simulations

PE-FNOs provide surrogates for parameterized PDEs where physical properties, environmental conditions, or inputs (e.g., battery particle radius, diffusivity (Panahi et al., 11 Aug 2025); turbulent Reynolds number (Atif et al., 23 Sep 2024); coefficients in Darcy/Helmholtz/Navier–Stokes (Li et al., 2020, Li et al., 2022)) must be varied at runtime. Once trained, the model generalizes rapidly across these external conditions.

b. Inverse Problems and Uncertainty Quantification

In simulation-based inference (SBI) on function spaces, the PE-FNO allows joint inference of function-valued parameters and vector-valued hyperparameters, yielding flexible posteriors that adapt to arbitrary configurations of the problem space (Moss et al., 28 May 2025).

c. Resolution Invariance and Super-Resolution

Because parameter embedding composes with the inherent mesh invariance of the FNO, the PE-FNO yields operators that generalize across both new parameter configurations and new discretizations, supporting “zero-shot” super-resolution on unobserved grids (Li et al., 2020, Kabri et al., 2023).
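
A self-contained toy demonstrating this property: since all learned weights live on truncated Fourier modes and 1x1 convolutions, the same (untrained, purely illustrative) model accepts any grid size.

```python
import torch
import torch.nn as nn

class TinyFNO(nn.Module):
    """Minimal resolution-free operator: lift -> one spectral layer -> project.
    The forward pass is independent of the input grid size."""
    def __init__(self, n_modes=12, width=16):
        super().__init__()
        self.n_modes = n_modes
        self.lift = nn.Conv1d(1, width, 1)
        self.weight = nn.Parameter(
            torch.randn(width, width, n_modes, dtype=torch.cfloat) / width)
        self.w = nn.Conv1d(width, width, 1)
        self.proj = nn.Conv1d(width, 1, 1)

    def forward(self, u):                    # u: (batch, 1, grid)
        v = self.lift(u)
        v_hat = torch.fft.rfft(v)
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.n_modes] = torch.einsum(
            "bim,iom->bom", v_hat[..., :self.n_modes], self.weight)
        v = torch.relu(self.w(v) + torch.fft.irfft(out_hat, n=v.size(-1)))
        return self.proj(v)

model = TinyFNO()
print(model(torch.randn(2, 1, 64)).shape)    # torch.Size([2, 1, 64])
print(model(torch.randn(2, 1, 256)).shape)   # same weights on a 4x finer grid
```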

d. Real-time and Embedded Digital Twins

PE-FNOs deliver latency reductions of up to 200× over fine-grid PDE solvers while preserving sub-percent errors (e.g., concentration/voltage in lithium-ion batteries (Panahi et al., 11 Aug 2025)). The ability to switch parameters at runtime is critical for control, optimization, and embedded inference.

3. Implementation Details

Embedding Strategy      | Example Use                      | Effect on Operator
------------------------|----------------------------------|--------------------------------------------------------------------
Channel-wise modulation | Battery radius, diffusivity $D$  | Per-layer spectral features scaled by parameter-dependent modulation
Concatenation           | Spatial field (e.g., $v(x)$)     | Whole parameter field included as a new input channel
Kernel modulation       | PDE parameter $\mathbf{p}$       | Spectral kernel $R_\phi$ learned as an explicit function of $\mathbf{p}$
Coordinates as features | Arbitrary geometry               | Sample coordinates and spatially varying parameters provided as input features
  • Auxiliary Embedding MLPs: Parameters are preprocessed via an MLP that outputs a set of modulation weights, which then gate or rescale spectral/convolutional features at each layer (Panahi et al., 11 Aug 2025).
  • Hybrid Architectures: Local spatial features—often missed by global Fourier convolution—can be captured via CNN preprocessors, with parameter embeddings concatenated; the resulting “Conv-FNO” or “UNet-FNO” maintains both spatial locality and global resolution invariance (Liu et al., 22 Mar 2025).
  • Distributed and Parallel Architectures: For problems with very high parameter space dimensionality (e.g., 2.6B variables for CO₂ sequestration (II et al., 2022)), PE-FNOs can be trained and run in a model-parallel fashion by domain decomposition, with parameter information embedded independently across subdomains.

4. Performance, Generalization, and Limitations

Mesh and Parameter Generalization:

  • PE-FNOs deployed for battery simulation (single-particle model, SPM) maintained normalized $L_2$ errors below 1% for concentration and mean absolute voltage errors under 1.7 mV across diverse load types and parametric sweeps, despite the added parameter modulation (Panahi et al., 11 Aug 2025).
  • In seismic imaging, parameter channels (e.g., variable velocity models, frequency) provide robust generalization to unseen datasets and smooth out-of-distribution models (Li et al., 2022).
  • For function-valued parameter inference, FNOPE achieves posterior predictive MSE nearly identical (within $10^{-4}$) to true posteriors at a fraction of the simulation cost (Moss et al., 28 May 2025).

Trade-offs:

  • Embedding parameters can marginally increase model error (e.g., 0.5918 percentage points in battery diffusivity inverse recovery (Panahi et al., 11 Aug 2025)), but this is offset by substantial gains in flexibility and parameter-sweep capability.
  • Over-embedding or improperly regularized parameter encodings risk reducing generalization performance; group-norm and Rademacher complexity analysis shows that increasing kernel or mode counts can widen the generalization gap unless effective norms are controlled (Kim et al., 2022). A sketch of such a norm penalty follows.
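
A toy sketch of this kind of capacity control; the penalty form, coefficient, and name-based matching are assumptions for illustration, not the estimator of Kim et al. (2022):

```python
import torch

def spectral_norm_penalty(model, lam=1e-4):
    """Illustrative capacity control: penalize the norms of complex spectral
    kernels (R_phi tensors) and of embedding-MLP weights."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if param.is_complex():                    # spectral kernel tensors
            penalty = penalty + param.abs().pow(2).sum()
        elif "gate" in name or "mlp" in name:     # embedding networks (by name)
            penalty = penalty + param.pow(2).sum()
    return lam * penalty

# During training (illustrative): loss = data_loss + spectral_norm_penalty(model)
```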

Resolution Invariance:

  • Advanced resizing/interpolation schemes allow PE-FNOs to operate across arbitrary input/output grid resolutions without substantial loss of accuracy by decoupling the spatial representation from the Fourier kernel parameters (Kabri et al., 2023, Liu et al., 22 Mar 2025).

5. Emerging Extensions

a. Structured and Physics-Encoded PE-FNOs

New approaches embed physical constraints (e.g., divergence-free stress via a curl potential (Khorrami et al., 27 Aug 2024)) directly in the network head rather than via soft loss penalties, combining parameter embedding with strict satisfaction of conservation or symmetry.
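
As a generic 2-D illustration of this idea (not the specific construction of Khorrami et al.), a network head can output a scalar potential and take its rotated gradient, so the returned field is divergence-free by construction rather than by penalty:

```python
import torch

def divergence_free_from_potential(psi, dx=1.0, dy=1.0):
    """Hard physics encoding (2-D sketch): turn a learned scalar potential
    psi of shape (batch, ny, nx) into v = (d psi/dy, -d psi/dx), which is
    divergence-free by construction in the continuum."""
    (dpsi_dy,) = torch.gradient(psi, spacing=dy, dim=-2)
    (dpsi_dx,) = torch.gradient(psi, spacing=dx, dim=-1)
    return torch.stack([dpsi_dy, -dpsi_dx], dim=1)   # (batch, 2, ny, nx)
```

With finite differences the divergence vanishes only up to discretization error; spectral differentiation makes it exact on the grid.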

b. Hybrid Spectral Techniques

Directions include lossless mappings in spectral neural operators (SNOs) (Fanaskov et al., 2022), where parameter embeddings control the coefficients of a parametrized spectral expansion, offering theoretically transparent operator mappings and elimination of aliasing.

c. Multiscale and Residual Learning

Hierarchical, residual, and multiscale PE-FNOs (e.g., convolution-residual, attention, and kernel modulation layers (Zhao et al., 2023, Qin et al., 10 Apr 2024, Liu et al., 22 Mar 2025)) enable tuning the operator’s frequency response, adapting kernel truncation and parameter embeddings to local solution or coefficient oscillations.

6. Practical Guidance and Future Directions

  • Architectural Choice: When the parameter space is small and global, channel-wise MLP embeddings or direct kernel modulation are effective, adding negligible runtime cost while keeping the software modular. For larger or spatially varying parameters, direct concatenation or inclusion as input channels is appropriate.
  • Regularization & Normalization: Regularize the kernel and embedding MLPs to avoid degradation of the network's generalization gap, following group-norm-based capacity control (Kim et al., 2022).
  • Data Augmentation: For SBI and arbitrary-grid inference, augment training with positional noise and masking to improve transfer to unseen discretizations (Moss et al., 28 May 2025); a toy sketch follows this list.
  • Physics-Informed/Encoded Operators: Combine parameter embedding with strict physical encoding whenever hard constraints (divergence, boundary conditions) are essential (Khorrami et al., 27 Aug 2024).
  • Scalability: Employ distributed/parallel architectures for high-dimensional parametric problems, embedding parameter fields locally in each decomposition shard (II et al., 2022).
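
A toy version of the augmentation mentioned above, assuming point-cloud-style inputs with explicit coordinates; the jitter scale and mask fraction are illustrative:

```python
import torch

def augment_for_arbitrary_grids(u, coords, jitter=1e-2, mask_frac=0.2):
    """Toy augmentation for grid-robust training: jitter the sample
    coordinates and randomly drop a fraction of points so the model never
    sees exactly the same discretization twice."""
    coords = coords + jitter * torch.randn_like(coords)
    keep = torch.rand(u.shape[-1]) > mask_frac        # shared mask over points
    return u[..., keep], coords[..., keep]
```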

7. Summary Table: Comparison of FNO and PE-FNO

Property             | FNO                               | PE-FNO
---------------------|-----------------------------------|--------------------------------------------------------------------
Parametric input     | Not explicit                      | Explicit scalar/vector/field parameter embedding
Generalization       | Across discretizations            | Across discretizations and physical/experimental parameters
Operator update      | Fixed spectral kernel             | Kernel modulated/conditioned by parameter embedding
Use cases            | Operator learning for a fixed PDE | Multi-parameter simulation, SBI, digital twins, real-time control
Example applications | Turbulence, Darcy flow            | Li-ion battery twins, seismic imaging, CO₂ storage, functional SBI

The Parameter-Embedded Fourier Neural Operator formalizes a blueprint for operator learning that is fast, mesh-invariant, and robust across a continuum of parameter regimes, supporting a spectrum of scientific and engineering applications where flexible, high-fidelity, and real-time parametric simulation is required.