Sparse Gaussian Process Variational Autoencoders (2010.10177v2)

Published 20 Oct 2020 in stat.ML, cs.LG, and cs.NE

Abstract: Large, multi-dimensional spatio-temporal datasets are omnipresent in modern science and engineering. An effective framework for handling such data is Gaussian process deep generative models (GP-DGMs), which employ GP priors over the latent variables of DGMs. Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points, which are essential for the computational efficiency of GPs, nor do they handle missing data -- a natural occurrence in many spatio-temporal datasets -- in a principled manner. We address these shortcomings with the development of the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations. Leveraging the benefits of amortised variational inference, the SGP-VAE enables inference in multi-output sparse GPs on previously unobserved data with no additional training. The SGP-VAE is evaluated in a variety of experiments where it outperforms alternative approaches including multi-output GPs and structured VAEs.
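
The abstract's two key ingredients -- a partial inference network that maps each observed output dimension to a Gaussian factor over the latents, and a sparse inducing-point GP approximation fed by those factors -- can be sketched compactly. The following is a minimal PyTorch sketch under stated assumptions: the names (PartialInferenceNet, sparse_gp_posterior), the single shared unit-variance RBF kernel, and the independent-GP-per-latent-dimension treatment are illustrative choices for exposition, not the authors' implementation. Missing data is handled by zeroing the precision of unobserved dimensions, so they contribute no factor rather than requiring imputation.

```python
import torch
import torch.nn as nn


def rbf(x1, x2, lengthscale=1.0):
    """RBF kernel with unit variance; x1: (N, d), x2: (M, d) -> (N, M)."""
    sq_dist = (x1.unsqueeze(-2) - x2.unsqueeze(-3)).pow(2).sum(-1)
    return torch.exp(-0.5 * sq_dist / lengthscale ** 2)


class PartialInferenceNet(nn.Module):
    """Partial inference network: each *observed* output dimension
    contributes a Gaussian factor (mean, precision) to the approximate
    posterior over the latents; unobserved dimensions are masked out,
    so partially observed datapoints need no imputation."""

    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )

    def forward(self, y, mask):
        # y, mask: (N, D); mask is 1 where y is observed, 0 where missing.
        N, D = y.shape
        out = self.net(y.reshape(N * D, 1)).reshape(N, D, 2 * self.latent_dim)
        mu, log_prec = out.split(self.latent_dim, dim=-1)
        prec = log_prec.exp() * mask.unsqueeze(-1)   # zero precision = no factor
        prec_sum = prec.sum(dim=1) + 1e-6            # (N, L)
        mu_bar = (prec * mu).sum(dim=1) / prec_sum   # precision-weighted mean
        return mu_bar, prec_sum                      # pseudo-observations for the GP


def sparse_gp_posterior(x, z, mu_bar, prec, jitter=1e-4):
    """Closed-form q(f) at inputs x, given M inducing inputs z and the
    Gaussian pseudo-observations (mu_bar, prec) from the inference net.
    Latent dimensions are treated as independent GPs sharing one kernel."""
    M = z.shape[0]
    Kuu = rbf(z, z) + jitter * torch.eye(M)
    Kuf = rbf(z, x)                                  # (M, N)
    kff_diag = torch.ones(x.shape[0])                # unit-variance RBF diagonal
    means, variances = [], []
    for l in range(mu_bar.shape[1]):
        lam = prec[:, l]                             # (N,)
        Sigma = Kuu + (Kuf * lam) @ Kuf.T            # Kuu + Kuf Lam Kfu
        A = torch.linalg.solve(Sigma, Kuf * lam)     # Sigma^-1 Kuf Lam
        m_u = Kuu @ (A @ mu_bar[:, l])               # optimal q(u) mean
        S_u = Kuu @ torch.linalg.solve(Sigma, Kuu)   # optimal q(u) covariance
        B = torch.linalg.solve(Kuu, Kuf)             # Kuu^-1 Kuf
        means.append(B.T @ m_u)
        var = kff_diag - (B * (Kuf - S_u @ B)).sum(0)
        variances.append(var.clamp_min(1e-6))
    return torch.stack(means, -1), torch.stack(variances, -1)  # (N, L) each


# Usage: encode a partially observed multi-output series, then the latent
# GP posterior (f_mean, f_var) can be sampled and passed to a VAE decoder.
x = torch.linspace(0, 1, 50).unsqueeze(-1)           # input locations
y = torch.randn(50, 3)                               # 3 output channels
mask = (torch.rand(50, 3) > 0.3).float()             # ~30% missing at random
enc = PartialInferenceNet(latent_dim=2)
mu_bar, prec = enc(y * mask, mask)
z_ind = torch.linspace(0, 1, 10).unsqueeze(-1)       # 10 inducing inputs
f_mean, f_var = sparse_gp_posterior(x, z_ind, mu_bar, prec)
```

Because the inference network is amortised (it acts per datapoint and per dimension), the same trained encoder can produce pseudo-observations for previously unseen inputs, which is what lets the SGP-VAE perform inference on new data without retraining.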

Authors (6)
  1. Matthew Ashman (14 papers)
  2. Jonathan So (6 papers)
  3. Will Tebbutt (8 papers)
  4. Vincent Fortuin (52 papers)
  5. Michael Pearce (13 papers)
  6. Richard E. Turner (112 papers)
Citations (29)