
Expression Rates of Neural Operators for Linear Elliptic PDEs in Polytopes

Published 26 Sep 2024 in math.NA and cs.NA (arXiv:2409.17552v2)

Abstract: We study the approximation rates of a class of deep neural network approximations of operators that arise as data-to-solution maps $\mathcal{G}^\dagger$ of linear elliptic partial differential equations (PDEs) and act between pairs $X,Y$ of suitable infinite-dimensional spaces. We prove expression rate bounds for approximate neural operators $\mathcal{G}$ with the structure $\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}$, with linear encoders $\mathcal{E}$ and decoders $\mathcal{R}$. The constructive proofs proceed via a recurrent NN structure obtained by unrolling exponentially convergent, self-consistent ("Richardson") iterations. We bound the operator approximation error in terms of the linear Kolmogorov $N$-widths of the data and solution sets and the size of the approximation network. We prove expression rate bounds for approximate neural solution operators emulating the coefficient-to-solution maps of elliptic PDEs set in $d$-dimensional polytopes, with $d\in\{2,3\}$, subject to Dirichlet, Neumann, or mixed boundary conditions. Exploiting weighted-norm characterizations of the solution sets of elliptic PDEs in polytopes, we show algebraic expression rates for problems with data of finite regularity, and exponential operator expression rates for analytic data.
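The $\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}$ structure and the unrolled Richardson iteration admit a simple finite-dimensional illustration. The following is a minimal sketch under stated assumptions, not the paper's construction: it stands in for the encoder/decoder pair with linear truncation and prolongation matrices and unrolls $K$ Richardson steps as a weight-tied recurrent loop. All names here (richardson_unrolled, E, R, omega, K) are illustrative, not from the paper.

```python
import numpy as np

def richardson_unrolled(A, f, omega, K):
    """K unrolled Richardson steps: u_{k+1} = u_k + omega * (f - A @ u_k).

    Each step is affine in u_k, so the loop is a depth-K, weight-tied
    (recurrent) linear network; the error contracts like
    rho(I - omega * A) ** K when that spectral radius is below 1.
    """
    u = np.zeros_like(f)
    for _ in range(K):
        u = u + omega * (f - A @ u)
    return u

rng = np.random.default_rng(0)
n, N = 8, 32                 # latent (encoded) and ambient dimensions
E = np.eye(n, N)             # linear encoder: keep the first n coefficients
R = E.T                      # linear decoder: zero-pad back to N dimensions

M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = M @ M.T                  # symmetric positive definite latent operator
omega = 1.0 / np.linalg.norm(A, 2)   # step size ensuring contraction

f = rng.standard_normal(N)   # toy "data" in the ambient space
u = R @ richardson_unrolled(A, E @ f, omega, K=50)   # G = R ∘ A ∘ E
print("latent residual:", np.linalg.norm(A @ (E @ u) - E @ f))
```

Because each unrolled step is affine in the current iterate, the residual decays geometrically in the depth $K$; this depth-controlled exponential convergence is the feature of the unrolled iteration that the paper's exponential expression rate bounds exploit.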
