You Only Accept Samples Once: Fast, Self-Correcting Stochastic Variational Inference

Published 5 Jun 2024 in stat.ML, cs.LG, and stat.CO | (2406.02838v1)

Abstract: We introduce YOASOVI, an algorithm for performing fast, self-correcting stochastic optimization for Variational Inference (VI) on large Bayesian hierarchical models. To accomplish this, we take advantage of available information on the objective function used for stochastic VI at each iteration and replace regular Monte Carlo sampling with acceptance sampling. Rather than spend computational resources drawing and evaluating over a large sample for the gradient, we draw only one sample and accept it with probability proportional to the expected improvement in the objective. This paper develops two versions of the algorithm: the first based on a naive intuition, and a second building up the algorithm as a Metropolis-type scheme. Empirical results based on simulations and benchmark datasets for multivariate Gaussian mixture models show that YOASOVI consistently converges faster (in clock time) and within better optimal neighborhoods than both regularized Monte Carlo and Quasi-Monte Carlo VI algorithms.


Authors (1)
