Gradient-enhanced deep Gaussian processes for multifidelity modelling (2402.16059v1)
Abstract: Multifidelity models integrate data from multiple sources to produce a single approximator for the underlying process. Dense low-fidelity samples are used to reduce interpolation error, while sparse high-fidelity samples are used to compensate for bias or noise in the low-fidelity samples. Deep Gaussian processes (GPs) are attractive for multifidelity modelling as they are non-parametric, robust to overfitting, perform well for small datasets, and, critically, can capture nonlinear and input-dependent relationships between data of different fidelities. Many datasets naturally contain gradient data, especially when they are generated by computational models that are compatible with automatic differentiation or have adjoint solutions. Principally, this work extends deep GPs to incorporate gradient data. We demonstrate this method on an analytical test problem and a realistic partial differential equation problem, where we predict the aerodynamic coefficients of a hypersonic flight vehicle over a range of flight conditions and geometries. In both examples, the gradient-enhanced deep GP outperforms a gradient-enhanced linear GP model and their non-gradient-enhanced counterparts.
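The paper does not include code here. As an illustration of the gradient-enhanced building block described in the abstract, the sketch below shows a single GP layer trained jointly on function values and gradients using GPyTorch's derivative-aware RBF kernel. It is a minimal, assumption-laden example: the class name, toy data, and hyperparameters are illustrative, and the deep/multifidelity composition (feeding the low-fidelity posterior and its gradient into a high-fidelity layer) that the paper actually proposes is not shown.

```python
# Minimal sketch (not the authors' implementation): a single gradient-enhanced GP
# layer in GPyTorch. Each training point supplies the function value and its
# derivative; the kernel models the cross-covariances between values and gradients.
import torch
import gpytorch

# Toy 1-D data: values and exact gradients of f(x) = sin(2*pi*x)
train_x = torch.linspace(0, 1, 20).unsqueeze(-1)
train_y = torch.stack([
    torch.sin(2 * torch.pi * train_x.squeeze(-1)),                # f(x)
    2 * torch.pi * torch.cos(2 * torch.pi * train_x.squeeze(-1)),  # f'(x)
], dim=-1)


class GradientEnhancedGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMeanGrad()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())

    def forward(self, x):
        # Joint Gaussian over [f(x), df/dx] at each input
        return gpytorch.distributions.MultitaskMultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2)  # value + 1 gradient dim
model = GradientEnhancedGP(train_x, train_y, likelihood)

# Fit hyperparameters by maximising the exact marginal likelihood
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Predict: column 0 of the mean is the function value, column 1 its gradient
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    test_x = torch.linspace(0, 1, 51).unsqueeze(-1)
    pred = likelihood(model(test_x))
    print(pred.mean[:, 0])
```

In the deep multifidelity setting the abstract describes, layers of this kind would be stacked, with the high-fidelity layer conditioned on the low-fidelity layer's output (and, for gradient enhancement, on its derivative via the chain rule); that stacking is beyond this sketch.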
Authors: Viv Bone, Chris van der Heide, Kieran Mackle, Ingo H. J. Jahn, Peter M. Dower, Chris Manzie