
Parallel cross-validation: a scalable fitting method for Gaussian process models (1912.13132v1)

Published 31 Dec 2019 in stat.CO

Abstract: Gaussian process (GP) models are widely used to analyze spatially referenced data and to predict values at locations without observations. In contrast to many algorithmic procedures, GP models are based on a statistical framework, which enables uncertainty quantification of the model structure and predictions. Both the evaluation of the likelihood and the prediction involve solving linear systems. Hence, the computational costs are large and limit the amount of data that can be handled. While there are many approximation strategies that lower the computational cost of GP models, they often provide only sub-optimal support for the parallel computing capabilities of current (high-performance) computing environments. We aim at bridging this gap with a parameter estimation and prediction method that is designed to be parallelizable. More precisely, we divide the spatial domain into overlapping subsets and use cross-validation (CV) to estimate the covariance parameters in parallel. We present simulation studies, which assess the accuracy of the parameter estimates and predictions. Moreover, we show that our implementation has good weak and strong parallel scaling properties. For illustration, we fit an exponential covariance model to a scientifically relevant canopy height dataset with 5 million observations. Using 512 processor cores in parallel brings the evaluation time of one covariance parameter configuration to less than 1.5 minutes. The parallel CV method can be easily extended to include approximate likelihood methods, multivariate and spatio-temporal data, as well as non-stationary covariance models.
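The abstract outlines the method's core steps: partition the spatial domain into overlapping subsets, score candidate covariance parameters by cross-validation on each subset independently (the step that parallelizes), and combine the subset scores. The sketch below illustrates this on synthetic data with an exponential covariance. It is a minimal illustration, not the paper's implementation: the leave-one-out CV criterion, the two-block split with a fixed halo, and the small parameter grid are all simplifying assumptions, and the per-block loop stands in for the distributed map the paper runs on many cores.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(D, sigma2, rho, nugget=1e-4):
    """Exponential covariance matrix C(h) = sigma2 * exp(-h/rho),
    with a small nugget on the diagonal for numerical stability."""
    return sigma2 * np.exp(-D / rho) + nugget * np.eye(D.shape[0])

def pairwise_dist(coords):
    return np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def loo_score(coords, y, sigma2, rho):
    """Mean squared leave-one-out residual on one subset, using the
    closed-form identity e_i = (K^{-1} y)_i / (K^{-1})_{ii} for a
    zero-mean GP (assumed CV criterion; the paper may use another)."""
    K = exp_cov(pairwise_dist(coords), sigma2, rho)
    Kinv = np.linalg.inv(K)
    resid = (Kinv @ y) / np.diag(Kinv)
    return float(np.mean(resid ** 2))

# Synthetic data: a zero-mean GP realization on [0, 1]^2.
n = 120
coords = rng.random((n, 2))
K_true = exp_cov(pairwise_dist(coords), sigma2=1.0, rho=0.2)
y = np.linalg.cholesky(K_true) @ rng.standard_normal(n)

# Overlapping subsets: split on the x-coordinate with a halo of 0.1,
# so points near the boundary appear in both blocks.
overlap = 0.1
blocks = [np.where(coords[:, 0] < 0.5 + overlap)[0],
          np.where(coords[:, 0] > 0.5 - overlap)[0]]

# Evaluate a small parameter grid; the per-block sum below is the
# embarrassingly parallel part (one block per worker in practice).
grid = [(s2, r) for s2 in (0.5, 1.0, 2.0) for r in (0.05, 0.2, 0.8)]

def cv_objective(params):
    s2, r = params
    return sum(loo_score(coords[b], y[b], s2, r) for b in blocks)

scores = {p: cv_objective(p) for p in grid}
best = min(scores, key=scores.get)
print("estimated (sigma2, rho):", best)
```

In a real run the grid search would be replaced by a continuous optimizer over the covariance parameters, and each block's score would be computed by a separate MPI rank or process, which is what gives the method its weak and strong scaling.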
