Quasi-Monte Carlo Methods: What, Why, and How? (2502.03644v1)
Abstract: Many questions in quantitative finance, uncertainty quantification, and other disciplines are answered by computing the population mean, $\mu := \mathbb{E}(Y)$, where instances of $Y := f(\boldsymbol{X})$ may be generated by numerical simulation and $\boldsymbol{X}$ has a simple probability distribution. The population mean can be approximated by the sample mean, $\hat{\mu}_n := n^{-1} \sum_{i=0}^{n-1} f(\boldsymbol{x}_i)$, for a well-chosen sequence of nodes, $\{\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots\}$, and a sufficiently large sample size, $n$. Computing $\mu$ is equivalent to computing a $d$-dimensional integral, $\int f(\boldsymbol{x}) \varrho(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}$, where $\varrho$ is the probability density for $\boldsymbol{X}$. Quasi-Monte Carlo methods replace independent and identically distributed sequences of random vector nodes, $\{\boldsymbol{x}_i\}_{i=0}^{\infty}$, by low discrepancy sequences. This accelerates the convergence of $\hat{\mu}_n$ to $\mu$ as $n \to \infty$. This tutorial describes low discrepancy sequences and their quality measures. We demonstrate the performance gains possible with quasi-Monte Carlo methods. Moreover, we describe how to formulate problems to realize the greatest performance gains using quasi-Monte Carlo. We also briefly describe the use of quasi-Monte Carlo methods for problems beyond computing the mean, $\mu$.
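The following is a minimal sketch, not code from the paper, of the idea in the abstract: estimating $\mu$ by the sample mean $\hat{\mu}_n$, once with i.i.d. uniform nodes (plain Monte Carlo) and once with a scrambled Sobol' low discrepancy sequence (quasi-Monte Carlo). The test integrand `f`, the dimension, and the use of `scipy.stats.qmc` are illustrative assumptions.

```python
# Compare plain Monte Carlo with quasi-Monte Carlo on a toy integrand
# over the unit cube [0, 1]^d whose true mean is exactly 1.
import numpy as np
from scipy.stats import qmc

def f(x):
    # Example integrand: product of mildly varying factors; E[f(X)] = 1
    # when X is uniform on [0, 1]^d (each factor has mean 1).
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

d, n = 6, 2**14                     # dimension and sample size (power of 2 suits Sobol')
rng = np.random.default_rng(7)

# Plain Monte Carlo: i.i.d. uniform nodes.
x_mc = rng.random((n, d))
mu_mc = f(x_mc).mean()

# Quasi-Monte Carlo: scrambled Sobol' nodes (a low discrepancy sequence).
sobol = qmc.Sobol(d=d, scramble=True, seed=7)
x_qmc = sobol.random(n)
mu_qmc = f(x_qmc).mean()

print("true mean    : 1.0")
print(f"MC estimate  : {mu_mc:.6f}  (error {abs(mu_mc - 1.0):.2e})")
print(f"QMC estimate : {mu_qmc:.6f}  (error {abs(mu_qmc - 1.0):.2e})")
```

For smooth integrands such as this one, the QMC error typically decays close to $O(n^{-1})$ rather than the $O(n^{-1/2})$ rate of plain Monte Carlo, which is the acceleration the abstract refers to.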