Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks (2404.03761v1)
Abstract: Learning approximations to smooth target functions of many variables from finite sets of pointwise samples is an important task in scientific computing and its many applications in computational science and engineering. Despite well over half a century of research on high-dimensional approximation, this remains a challenging problem. Yet, significant advances have been made in the last decade towards efficient methods for doing this, commencing with so-called sparse polynomial approximation methods and continuing most recently with methods based on Deep Neural Networks (DNNs). In tandem, there have been substantial advances in the relevant approximation theory and analysis of these techniques. In this work, we survey this recent progress. We describe the contemporary motivations for this problem, which stem from parametric models and computational uncertainty quantification; the relevant function classes, namely, classes of infinite-dimensional, Banach-valued, holomorphic functions; fundamental limits of learnability from finite data for these classes; and finally, sparse polynomial and DNN methods for efficiently learning such functions from finite data. For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning. Aiming to narrow this gap, we develop the topic of practical existence theory, which asserts the existence of dimension-independent DNN architectures and training strategies that achieve provably near-optimal generalization errors in terms of the amount of training data.
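To make the simplest instance of the sparse polynomial methods discussed above concrete, the following is a minimal, self-contained sketch (assuming only NumPy; the target function `f`, dimension `d`, total degree `n` and sample size `m` are illustrative choices, not taken from the paper): discrete least-squares approximation of a smooth, high-dimensional function from i.i.d. Monte Carlo samples in an orthonormal Legendre basis over a downward-closed (total-degree) multi-index set.

```python
# A minimal sketch (not the authors' code) of discrete least-squares
# polynomial approximation from Monte Carlo samples.
import itertools
import numpy as np
from numpy.polynomial.legendre import legval

d, n, m = 8, 3, 2000  # dimension, total polynomial degree, number of samples
rng = np.random.default_rng(0)

# A smooth (in fact holomorphic) benchmark target on [-1,1]^d.
c = 1.0 / 2.0 ** np.arange(1, d + 1)  # sum(c) < 1, so f has no singularity

def f(y):
    return 1.0 / (1.0 + y @ c)

# Downward-closed (total-degree) index set: |nu|_1 <= n. Here |Lambda| = 165.
Lambda = [nu for nu in itertools.product(range(n + 1), repeat=d) if sum(nu) <= n]

def legendre_1d(k, t):
    # Degree-k Legendre polynomial, orthonormal w.r.t. the uniform
    # probability measure on [-1,1].
    coeff = np.zeros(k + 1)
    coeff[k] = 1.0
    return np.sqrt(2 * k + 1) * legval(t, coeff)

def design_matrix(Y):
    # Entry (i, j): tensor-product Legendre polynomial nu_j at sample y_i.
    A = np.ones((Y.shape[0], len(Lambda)))
    for j, nu in enumerate(Lambda):
        for i, k in enumerate(nu):
            if k > 0:
                A[:, j] *= legendre_1d(k, Y[:, i])
    return A

# Draw i.i.d. uniform samples and solve the least-squares problem.
Y = rng.uniform(-1.0, 1.0, size=(m, d))
coeffs, *_ = np.linalg.lstsq(design_matrix(Y), np.apply_along_axis(f, 1, Y), rcond=None)

# Estimate the generalization (L^2) error on fresh test points.
Yt = rng.uniform(-1.0, 1.0, size=(1000, d))
residual = design_matrix(Yt) @ coeffs - np.apply_along_axis(f, 1, Yt)
print(f"{len(Lambda)} basis functions, test RMSE ~ {np.linalg.norm(residual) / np.sqrt(1000):.2e}")
```

In the theory surveyed here, least-squares schemes of this type achieve near-optimal error rates for holomorphic targets provided the sample size m scales log-linearly in the number of basis functions; the DNN methods and practical existence theorems discussed in the paper emulate such polynomial approximations with neural networks.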
- A. Abdeljawad and P. Grohs. Sampling complexity of deep approximation spaces. arXiv:2312.1337, 2023.
- B. Adcock. Infinite-dimensional ℓ¹ minimization and function approximation from pointwise data. Constr. Approx., 45(3):343–390, 2017.
- Correcting for unknown errors in sparse high-dimensional function approximation. Numer. Math., 142(3):667–711, 2019.
- B. Adcock and S. Brugiapaglia. Monte Carlo is a good sampling strategy for polynomial approximation in high dimensions. arXiv:2208.09045, 2023.
- Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data. In J. Bruna, J. S. Hesthaven, and L. Zdeborová, editors, Proceedings of The Second Annual Conference on Mathematical and Scientific Machine Learning, volume 145 of Proc. Mach. Learn. Res. (PMLR), pages 1–36. PMLR, 2021.
- Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks. arXiv:2211.12633, 2023.
- On efficient algorithms for computing near-best polynomial approximations to high-dimensional, Hilbert-valued functions from limited samples. Mem. Eur. Math. Soc. (In press), 2024.
- Sparse Polynomial Approximation of High-Dimensional Functions. Comput. Sci. Eng. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2022.
- Restarts subject to approximate sharpness: a parameter-free and optimal scheme for first-order methods. arXiv:2301.02268, 2023.
- B. Adcock and N. Dexter. The gap between theory and practice in function approximation with deep neural networks. SIAM J. Math. Data Sci., 3(2):624–655, 2021.
- Optimal approximation of infinite-dimensional holomorphic functions II: recovery from i.i.d. pointwise samples. arXiv:2310.16940, 2023.
- Optimal approximation of infinite-dimensional holomorphic functions. Calcolo, 61:12, 2024.
- B. Adcock and A. C. Hansen. Compressive Imaging: Structure, Sampling, Learning. Cambridge University Press, Cambridge, UK, 2021.
- K. Ajavon. Surrogate models for diffusion on graphs: a high-dimensional polynomial approach. Master’s thesis, Concordia University, 2024.
- Am (A)I hallucinating? Non-robustness, hallucinations and unpredictable performance of AI for MR image reconstruction. Preprint, 2023.
- On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. USA, 117(48):30088–30095, 2020.
- A stochastic collocation method for elliptic partial differential equations with random input data. SIAM J. Numer. Anal., 43(3):1005–1034, 2007.
- Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison. In J. S. Hesthaven and E. M. Rønquist, editors, Spectral and High Order Methods for Partial Differential Equations, volume 76 of Lect. Notes Comput. Sci. Eng., pages 43–62, Berlin, Heidelberg, Germany, 2011. Springer.
- Full error analysis for the training of deep neural networks. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 25(2):2150020, 2022.
- Convergence of quasi-optimal stochastic Galerkin methods for a class of PDEs with random coefficients. Comput. Math. Appl., 67(4):732–751, 2014.
- On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods. Math. Models Methods Appl. Sci., 22(9):1250023, 2012.
- Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing. Math. Finance, 34(1):90–150, 2023.
- Stochastic finite element: a non intrusive approach by regression. Eur. J. Comput. Mech., 15(1-3):81–92, 2006.
- Model reduction and neural networks for parametric PDEs. SMAI J. Comput. Math., 7:121–157, 2021.
- Sparse tensor discretization of elliptic SPDEs. SIAM J. Sci. Comput., 31(6):4281–4304, 2010.
- M. Blanchard and M. A. Bennouna. The representation power of neural networks: breaking the curse of dimensionality. arXiv:2012.05451, 2020.
- G. Blatman and B. Sudret. Adaptive sparse polynomial chaos expansion based on least angle regression. J. Comput. Phys., 230:2345–2367, 2011.
- Optimal approximation with sparsely connected deep neural networks. SIAM J. Math. Data Sci., 1(1):8–45, 2019.
- Polynomial approximation of anisotropic analytic functions of several variables. Constr. Approx., 53:319–348, 2021.
- Rational neural networks. In Advances in Neural Information Processing Systems, pages 14243–14253, 2020.
- N. Boullé and A. Townsend. A mathematical guide to operator learning. arXiv:2312.14688, 2023.
- Physics-informed deep learning and compressive collocation for high-dimensional diffusion-reaction equations: practical existence theory and numerics. Preprint, 2024.
- Sparse recovery in bounded Riesz systems with applications to numerical methods for PDEs. Appl. Comput. Harmon. Anal., 53:231–269, 2021.
- Analytic regularity and collocation approximation for elliptic PDEs with random domain deformations. Comput. Math. Appl., 71(6):1173–1197, 2016.
- A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision, 40(1):120–145, 2011.
- A. Chambolle and T. Pock. On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program., 159(1-2):253–287, 2016.
- Nonparametric regression on low-dimensional manifolds using deep ReLU networks: function approximation and statistical recovery. Inf. Inference, 11(4):1203–1253, 2022.
- Discrete least squares polynomial approximation with random evaluations - application to parametric and stochastic elliptic PDEs. ESAIM Math. Model. Numer. Anal., 49(3):815–837, 2015.
- High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Found. Comput. Math., 14(4):601–633, 2014.
- Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. J. Math. Pures Appl., 103(2):400–428, 2015.
- Sparse harmonic transforms: a new class of sublinear-time algorithms for learning functions of many variables. Found. Comput. Math., 21(2):275–329, 2021.
- Sparse harmonic transforms II: best s-term approximation guarantees for bounded orthonormal product bases in sublinear-time. Numer. Math., 148(2):293–362, 2021.
- Deep-HyROMnet: A deep learning-based operator approximation for hyper-reduction of nonlinear parametrized PDEs. J. Sci. Comput., 93:57, 2022.
- On the stability and accuracy of least squares approximations. Found. Comput. Math., 13:819–834, 2013.
- A. Cohen and R. A. DeVore. Approximation of high-dimensional parametric PDEs. Acta Numer., 24:1–159, 2015.
- Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs. Found. Comput. Math., 10:615–646, 2010.
- Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDE’s. Anal. Appl. (Singap.), 9(1):11–47, 2011.
- Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension. Constr. Approx., 45:497–519, 2017.
- Shape holomorphy of the stationary Navier–Stokes equations. SIAM J. Math. Anal., 50(2):1720–1752, 2018.
- The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem. Proc. Natl. Acad. Sci. USA, 119(12):e2107151119, 2022.
- G. Cybenko. Approximation by Superpositions of a Sigmoidal Function. Math. Control Signals Systems, 2(4):303–314, 1989.
- Robust training and initialization of deep neural networks: An adaptive basis viewpoint. In J. Lu and R. Ward, editors, Proceedings of The First Mathematical and Scientific Machine Learning Conference, volume 107 of Proceedings of Machine Learning Research, pages 512–536, Princeton University, Princeton, NJ, USA, 2020. PMLR.
- D. Dũng and V. K. Nguyen. Deep ReLU neural networks in high-dimensional approximation. Neural Netw., 142:619–635, 2021.
- Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs. J. Complexity, 79:101779, 2023.
- Hyperbolic Cross Approximation. Adv. Courses Math. CRM Barcelona. Birkhäuser, Basel, Switzerland, 2018.
- F. Dai and V. Temlyakov. Universal sampling discretization. Constr. Approx., 58:589–613, 2023.
- Data driven approximation of parametrized PDEs by reduced basis and neural networks. J. Comput. Phys., 416:109550, 2020.
- J. Daws and C. Webster. Analysis of deep neural networks with quasi-optimal polynomial approximation rates. arXiv:1912.02302, 2019.
- J. Daws and C. G. Webster. A Polynomial-Based Approach for Architectural Design and Learning with Deep Neural Networks. arXiv:1905.10457, 2019.
- On the approximation of functions by tanh neural networks. Neural Networks, 143:732–750, 2021.
- Neural network approximation. Acta Numer., 30:327–444, 2021.
- R. A. DeVore. Nonlinear approximation. Acta Numer., 7:51–150, 1998.
- A. Doostan and H. Owhadi. A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys., 230(8):3015–3034, 2011.
- The Barron space and the flow-induced function spaces for neural network models. Constr. Approx., 55:369–406, 2021.
- W. E and Q. Wang. Exponential convergence of the deep neural network approximation for analytic functions. Sci. China Math., 61(10):1733–1740, 2018.
- Deep neural network approximation theory. IEEE Trans. Inform. Theory, 67(6):2581–2623, 2021.
- Stochastic collocation with kernel density estimation. Comput. Methods Appl. Mech. Engrg., 245-246:36–46, 2012.
- S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Appl. Numer. Harmon. Anal. Birkhäuser, New York, NY, 2013.
- N. R. Franco and S. Brugiapaglia. A practical existence theorem for reduced order models based on convolutional autoencoders. arXiv:2402.00435, 2024.
- J. Frankle and M. Carbin. The lottery ticket hypothesis: finding sparse, trainable neural networks. In ICLR, 2019.
- B. Ganapathysubramanian and N. Zabaras. Sparse grid collocation schemes for stochastic natural convection problems. J. Comput. Phys., 225(1):652–685, 2007.
- Numerical solution of the parametric diffusion equation by deep neural networks. J. Sci. Comput., 88:22, 2021.
- Handbook of Uncertainty Quantification. Springer, Switzerland, 2017.
- Stochastic Finite Elements: A Spectral Approach. Dover Publications, Inc., Mineola, NY, revised edition, 2003.
- P. Grohs and F. Voigtlaender. Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces. Found. Comput. Math. (in press), 2023.
- Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms. Anal. Appl. (Singap.), 18(5):803–859, 2020.
- I. Gühring and M. Raslan. Approximation rates for neural networks with encodable weights in smoothness spaces. Neural Networks, 134:107–130, 2021.
- An adaptive wavelet stochastic collocation method for irregular solutions of partial differential equations with random input data. In J. Garcke and D. Pflüger, editors, Sparse Grids and Applications – Munich 2012, volume 97 of Lect. Notes Comput. Sci. Eng., pages 137–170. Springer, Cham, Switzerland, 2014.
- Stochastic finite element methods for partial differential equations with random input data. Acta Numer., 23:521–650, 2014.
- Constructing least-squares polynomial approximations. SIAM Rev., 62(2):483–508, 2020.
- M. Hadigol and A. Doostan. Least squares polynomial chaos expansion: a review of sampling strategies. Comput. Methods Appl. Mech. Engrg., 332:382–407, 2018.
- J. Hampton and A. Doostan. Compressive sampling methods for sparse polynomial chaos expansions. In R. Ghanem, D. Higdon, and H. Owhadi, editors, Handbook of Uncertainty Quantification, pages 827–855. Springer, Cham, Switzerland, 2017.
- M. Hansen and C. Schwab. Analytic regularity and nonlinear approximation of a class of parametric semilinear elliptic PDEs. Math. Nachr., 286(8-9):832–860, 2013.
- M. Hansen and C. Schwab. Sparse adaptive approximation of high dimensional parametric initial value problems. Vietnam J. Math., 41(2):181–215, 2013.
- A neural multilevel method for high-dimensional parametric PDEs. In Advances in Neural Information Processing Systems, 2021.
- Multilevel CNNs for parametric PDEs. J. Mach. Learn. Res., 24:1–42, 2023.
- Neural and spectral operator surrogates: unified construction and expression rate bounds. arXiv:2207.04950v1, 2022.
- V. H. Hoang and C. Schwab. Regularity and generalized polynomial chaos approximation of parametric and random second-order hyperbolic partial differential equations. Anal. Appl. (Singap.), 10(3):295–326, 2012.
- Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res., 23:1–124, 2021.
- Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
- Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids. J. Comput. Phys., 230(10):3977–3997, 2011.
- Sampling discretization and related problems. J. Complexity, 71:101653, 2022.
- NeuFENet: Neural finite element solutions with theoretical bounds for parametric PDEs. arXiv:2110.01601, 2021.
- Solving parametric PDE problems with artificial neural networks. European J. Appl. Math., 32(3):421–435, 2021.
- Neural operator: Learning maps between function spaces with applications to PDEs. J. Mach. Learn. Res., 24:1–97, 2023.
- Operator learning: algorithms and analysis. arXiv:2402.15715, 2024.
- Approximation of mixed order Sobolev functions on the d-torus: asymptotics, preasymptotics, and d-dependence. Constr. Approx., 42:353–398, 2015.
- A. Kunoth and C. Schwab. Analytic regularity and GPC approximation for control problems constrained by linear parametric elliptic and parabolic PDEs. SIAM J. Control Optim., 51(3):2442–2471, 2013.
- F. Laakmann and P. Petersen. Efficient approximation of solutions of parametric linear transport equations by ReLU DNNs. Adv. Comput. Math., 47(11), 2021.
- S. Lanthaler. Operator learning with PCA-Net: upper and lower complexity bounds. arXiv:2303.16317, 2023.
- O. Le Maître and O. M. Knio. Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics. Sci. Comput. Springer, Dordrecht, Netherlands, 2010.
- Solving parametric partial differential equations with deep rectified quadratic unit neural networks. J. Sci. Comput., 93:80, 2022.
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units. Commun. Comput. Phys., 27:379–411, 2020.
- Fourier neural operator for parametric partial differential equations. In ICLR, 2021.
- S. Liang and R. Srikant. Why deep neural networks for function approximation? In ICLR, 2017.
- De Rham compatible Deep Neural Network FEM. Neural Netw., 165:721–739, 2023.
- Deep network approximation for smooth functions. SIAM J. Math. Anal., 53(5):5465–5506, 2021.
- Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell., 3:218–229, 2021.
- X. Ma and N. Zabaras. An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations. J. Comput. Phys., 228(8):3084–3113, 2009.
- L. Mathelin and K. A. Gallivan. A compressed sensing approach for partial differential equations with random input data. Commun. Comput. Phys., 12(4):919–954, 2012.
- Stochastic approaches to uncertainty quantification in CFD simulations. Numer. Algorithms, 38(1-3):209–236, 2005.
- H. Mhaskar. Approximation properties of a multilayered feedforward artificial neural network. Adv. Comput. Math., 1:61–80, 1993.
- H. Mhaskar. Neural networks for optimal approximation of smooth and analytic functions. Neural Comput., 8(1):164–177, 1996.
- G. Migliorati. Polynomial approximation by means of the random discrete L² projection and application to inverse problems for PDEs with stochastic data. PhD thesis, Politecnico di Milano, 2013.
- G. Migliorati. Adaptive polynomial approximation by means of random discrete least squares. In A. Abdulle, S. Deparis, D. Kressner, F. Nobile, and M. Picasso, editors, Numerical Mathematics and Advanced Applications – ENUMATH 2013, pages 547–554, Cham, Switzerland, 2015. Springer.
- G. Migliorati. Adaptive approximation by optimal weighted least squares methods. SIAM J. Numer. Anal., 57(5):2217–2245, 2019.
- Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points. J. Multivariate Anal., 142:167–182, 2015.
- Approximation of quantities of interest in stochastic PDEs by the random discrete L² projection on polynomial spaces. SIAM J. Sci. Comput., 35(3):A1440–A1460, 2013.
- Analysis of the discrete L² projection on polynomial spaces with random evaluations. Found. Comput. Math., 14:419–456, 2014.
- Algorithm unrolling: interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag., 38(2):18–44, 2021.
- H. Montanelli and Q. Du. New error bounds for deep ReLU networks using sparse grids. SIAM J. Math. Data Sci., 1(1):78–92, 2019.
- Deep ReLU networks overcome the curse of dimensionality for bandlimited functions. J. Comput. Math., 39(6):801–815, 2021.
- M. Neyra-Nesterenko and B. Adcock. NESTANets: stable, accurate and efficient neural networks for analysis-sparse inverse problems. Sampl. Theory Signal Process. Data Anal., 21:4, 2023.
- An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal., 46(5):2411–2442, 2008.
- A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal., 46(5):2309–2345, 2008.
- E. Novak. Deterministic and Stochastic Error Bounds in Numerical Analysis, volume 1349 of Lect. Notes in Math. Springer, Berlin, Heidelberg, 1988.
- E. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Volume I: Linear Information, volume 6. European Math. Soc. Publ. House, Zürich, 2008.
- E. Novak and H. Woźniakowski. Tractability of Multivariate Problems, Volume II: Standard Information for Functionals, volume 12. European Math. Soc. Publ. House, Zürich, 2010.
- I. Ohn and Y. Kim. Smooth function approximation by deep neural networks with general activation functions. Entropy, 21(7):627, 2019.
- Deep ReLU networks and high-order finite element methods. Anal. Appl. (Singap.), 18(5):715–770, 2020.
- J. A. A. Opschoor and C. Schwab. Deep ReLU networks and high-order finite element methods II: Chebyshev emulation. arXiv:2310.07261, 2023.
- Exponential ReLU DNN expression of holomorphic maps in high dimension. Constr. Approx., 55:537–582, 2022.
- A weighted ℓ₁-minimization approach for sparse polynomial chaos expansions. J. Comput. Phys., 267:92–111, 2014.
- P. Petersen and F. Voigtlaender. Optimal approximation of piecewise smooth functions using deep ReLU neural networks. Neural Networks, 108:296–330, 2018.
- A. Pinkus. Approximation theory of the MLP model in neural networks. Acta Numer., 8:143–195, 1999.
- Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. Int. J. Autom. Comput., 14:503–519, 2017.
- H. Rauhut and C. Schwab. Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations. Math. Comp., 86:661–700, 2017.
- H. Rauhut and R. Ward. Sparse Legendre expansions via ℓ₁-minimization. J. Approx. Theory, 164(5):517–533, 2012.
- H. Rauhut and R. Ward. Interpolation via weighted ℓ¹ minimization. Appl. Comput. Harmon. Anal., 40(2):321–351, 2016.
- V. Roulet and A. d’Aspremont. Sharpness, restart, and acceleration. SIAM J. Optim., 30(1):262–289, 2020.
- L. Scarabosio. Deep neural network surrogates for nonsmooth quantities of interest in shape uncertainty quantification. SIAM/ASA J. Uncertain. Quantif., 10(3):975–1011, 2022.
- J. Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Statist., 48(4):1875–1897, 2020.
- Deep operator network approximation rates for Lipschitz operators. arXiv:2307.09835, 2023.
- C. Schwab and J. Zech. Deep learning in high dimension: neural network expression rates for generalized polynomial chaos expansions in UQ. Anal. Appl. (Singap.), 17(1):19–55, 2019.
- C. Schwab and J. Zech. Deep learning in high dimension: neural network expression rates for analytic functions in L²(ℝ^d, γ_d). SIAM/ASA J. Uncertain. Quantif., 11(1):199–234, 2023.
- B. Settles. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Springer, Cham, Switzerland, 2012.
- Provable approximation properties for deep neural networks. Appl. Comput. Harmon. Anal., 44:537–557, 2018.
- R. C. Smith. Uncertainty Quantification: Theory, Implementation, and Applications. Comput. Sci. Eng. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2013.
- T. J. Sullivan. Introduction to Uncertainty Quantification, volume 63 of Texts Appl. Math. Springer, Cham, Switzerland, 2015.
- T. Suzuki. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. In ICLR, 2019.
- ChebNet: efficient and stable constructions of deep neural networks with rectified power units via Chebyshev approximation. arXiv:1911.05467, 2019.
- M. Telgarsky. Neural networks and rational functions. In ICML, 2017.
- V. N. Temlyakov. The Marcinkiewicz-type discretization theorems. Constr. Approx., 48(2):337–369, 2018.
- R. A. Todor and C. Schwab. Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients. IMA J. Numer. Anal., 27(2):232–261, 2007.
- Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients. Numer. Math., 137(2):451–493, 2017.
- Information-Based Complexity. Elsevier Science and Technology Books, 1988.
- L. N. Trefethen. Approximation Theory and Approximation Practice. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2013.
- M. Vidyasagar. An Introduction to Compressed Sensing. Comput. Sci. Eng. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2019.
- D. Xiu and J. S. Hesthaven. High-order collocation methods for differential equations with random inputs. SIAM J. Sci. Comput., 27(3):1118–1139, 2005.
- Stochastic collocation algorithms using ℓ₁-minimization. Int. J. Uncertain. Quantif., 2(3):279–293, 2012.
- X. Yang and G. E. Karniadakis. Reweighted ℓ₁ minimization method for stochastic elliptic differential equations. J. Comput. Phys., 248:87–108, 2013.
- D. Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103–114, 2017.
- D. Yarotsky. Optimal approximation of continuous functions by very deep ReLU networks. In S. Bubeck, V. Perchet, and P. Rigollet, editors, Proceedings of the 31st Conference On Learning Theory, volume 75 of Proceedings of Machine Learning Research, pages 639–649. PMLR, 2018.
- Hyperspherical sparse approximation techniques for high-dimensional discontinuity detection. SIAM Rev., 58(3):517–551, 2016.
Authors: Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga