Expression Rates of Neural Operators for Linear Elliptic PDEs in Polytopes (2409.17552v2)

Published 26 Sep 2024 in math.NA and cs.NA

Abstract: We study the approximation rates of a class of deep neural network approximations of operators that arise as data-to-solution maps $\mathcal{G}^\dagger$ of linear elliptic partial differential equations (PDEs) and act between pairs $X,Y$ of suitable infinite-dimensional spaces. We prove expression rate bounds for approximate neural operators $\mathcal{G}$ with the structure $\mathcal{G} = \mathcal{R} \circ \mathcal{A} \circ \mathcal{E}$, with linear encoders $\mathcal{E}$ and decoders $\mathcal{R}$. The constructive proofs proceed via a recurrent NN structure obtained by unrolling exponentially convergent, self-consistent ("Richardson") iterations. We bound the operator approximation error in terms of the linear Kolmogorov $N$-widths of the data and solution sets and the size of the approximation network. We prove expression rate bounds for approximate neural solution operators emulating the coefficient-to-solution maps of elliptic PDEs set in $d$-dimensional polytopes, with $d\in\{2,3\}$, subject to Dirichlet, Neumann, or mixed boundary conditions. Exploiting weighted norm characterizations of the solution sets of elliptic PDEs in polytopes, we show algebraic expression rates for problems with data of finite regularity, and exponential operator expression rates for analytic data.
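The constructive device in the abstract can be illustrated concretely: Richardson iteration $u_{k+1} = u_k + \omega(f - Au_k)$ contracts geometrically whenever $\|I - \omega A\| < 1$, and unrolling a fixed number of such steps yields a recurrent network core playing the role of $\mathcal{A}$ (the linear encoder $\mathcal{E}$ and decoder $\mathcal{R}$ are omitted). The following is a minimal NumPy sketch under simplifying assumptions, with a small well-conditioned SPD matrix standing in for the discretized PDE operator; it illustrates the convergence mechanism only, not the paper's construction.

```python
# Illustrative sketch (not the paper's construction): Richardson iteration
#   u_{k+1} = u_k + omega * (f - A u_k)
# converges geometrically when ||I - omega*A|| < 1. Unrolling K such steps
# gives the recurrent structure whose depth the expression rate bounds exploit.
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (B @ B.T) / n   # well-conditioned SPD stand-in operator
f = rng.standard_normal(n)
u_exact = np.linalg.solve(A, f)       # reference solution

lam = np.linalg.eigvalsh(A)
omega = 2.0 / (lam[0] + lam[-1])      # optimal damping for an SPD matrix

u = np.zeros(n)
for k in range(1, 31):
    u = u + omega * (f - A @ u)       # one unrolled "layer" of the recurrence
    if k % 5 == 0:
        print(f"step {k:2d}  error {np.linalg.norm(u - u_exact):.2e}")
```

With the optimal damping $\omega = 2/(\lambda_{\min}+\lambda_{\max})$, the error contracts by the factor $(\kappa-1)/(\kappa+1)$ per step, where $\kappa$ is the condition number of $A$; this geometric decay is the exponential convergence that the unrolled depth of the recurrent NN emulates.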
