
Deriving covariance functions for non-FNO neural operator architectures

Derive operator-valued covariance functions for neural operator architectures beyond the Fourier Neural Operator (FNO), including the Graph Neural Operator, in order to characterize the infinite-width Gaussian process limits of these architectures and to enable kernel-based operator learning analogous to the FNO case.


Background

The paper establishes that infinite-width neural operators are Gaussian processes and provides explicit covariance function computations for two parametrizations: the Fourier Neural Operator (FNO) and a toroidal Matérn operator. These results enable posterior inference in regression scenarios and shed light on the inductive biases of FNOs.

While the work focuses on FNOs, the broader family of neural operators includes other architectures such as the Graph Neural Operator. Extending the covariance function derivations to these architectures would generalize the theoretical framework and facilitate analogous kernel-based methods for operator learning.
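To make the "kernel-based operator learning" connection concrete, the following is a minimal sketch of Gaussian process posterior inference given some covariance function. The squared-exponential kernel here is only a stand-in for a derived neural-operator covariance (such as the FNO or toroidal Matérn covariance mentioned above); the function names, grid, and hyperparameters are illustrative assumptions, not from the paper.

```python
import numpy as np

def kernel(X1, X2, lengthscale=0.2):
    """Stand-in stationary covariance; a derived operator covariance
    (e.g. the FNO covariance) would be substituted here."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """Standard GP regression: posterior mean and covariance at X_test."""
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = kernel(X_train, X_test)
    K_ss = kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                 # posterior mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                 # posterior covariance
    return mean, cov

# Toy 1D example: observe a function at a few points, infer it elsewhere.
X_train = np.array([0.1, 0.4, 0.7])
y_train = np.sin(2 * np.pi * X_train)
X_test = np.linspace(0.0, 1.0, 50)
mean, cov = gp_posterior(X_train, y_train, X_test)
```

Any architecture for which an operator-valued covariance can be derived plugs into this same posterior-inference machinery, which is why extending the derivations beyond the FNO is of practical interest.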

References

Moreover, while we focused on the ubiquitous FNO architecture, deriving covariance functions for other architectures, such as the graph neural operator \citep{Kovachki2023}, remains an open direction.

Infinite Neural Operators: Gaussian processes on functions (2510.16675 - Souza et al., 19 Oct 2025) in Section 6 (Discussion)