
Multilevel Quasi-Monte Carlo for Optimization under Uncertainty (2109.14367v1)

Published 29 Sep 2021 in math.NA, cs.NA, and math.OC

Abstract: This paper considers the problem of optimizing the average tracking error for an elliptic partial differential equation with an uncertain lognormal diffusion coefficient. In particular, the application of the multilevel quasi-Monte Carlo (MLQMC) method to the estimation of the gradient is investigated, with a circulant embedding (CE) method used to sample the stochastic field. A novel regularity analysis of the adjoint variable is essential for the MLQMC estimation of the gradient in combination with the samples generated using the CE method. A rigorous cost and error analysis shows that a randomly shifted quasi-Monte Carlo method leads to a faster rate of decay in the root mean square error of the gradient than the ordinary Monte Carlo method, while considering multiple levels substantially reduces the computational effort. Numerical experiments confirm the improved rate of convergence and show that the MLQMC method outperforms the multilevel Monte Carlo method and the single level quasi-Monte Carlo method.
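The two ingredients named in the abstract, a randomly shifted quasi-Monte Carlo rule and a multilevel telescoping sum over discretization levels, can be illustrated with a minimal sketch. This is not the paper's implementation: a cheap midpoint-rule quadrature of a smooth toy integrand stands in for the PDE solve at mesh width h_l = 2^{-l}, the problem is one-dimensional, and all names (`lattice_points`, `qmc_mean`, `mlqmc`) and parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lattice_points(n, dim, z, shift):
    """Rank-1 lattice rule points x_i = frac(i*z/n + shift), i = 0..n-1."""
    i = np.arange(n).reshape(-1, 1)
    return np.mod(i * z / n + shift, 1.0)

def qmc_mean(f, n, dim, z, n_shifts=8):
    """Randomly shifted QMC estimate of E[f(U)], U ~ Uniform([0,1]^dim).

    Averaging over independent random shifts makes the lattice-rule
    estimator unbiased and allows an empirical error estimate.
    """
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(dim)
        pts = lattice_points(n, dim, z, shift)
        estimates.append(f(pts).mean())
    return float(np.mean(estimates))

def P(u, level):
    """Toy level-l quantity of interest: midpoint-rule approximation of
    int_0^1 exp(u * x) dx on 2^level cells, standing in for a PDE solve
    on a mesh of width h_l = 2^{-level}."""
    h = 2.0 ** (-level)
    x = np.arange(h / 2, 1.0, h)  # midpoint grid
    return (np.exp(np.outer(u[:, 0], x)) * h).sum(axis=1)

def mlqmc(levels, n0=64):
    """Multilevel QMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].

    Coarse-level corrections shrink with l, so fewer QMC points are
    spent on the (cheap but numerous) fine-level differences.
    """
    z = np.array([1])  # generating vector; trivial in one dimension
    total = qmc_mean(lambda u: P(u, 0), n0, 1, z)
    for l in range(1, levels + 1):
        diff = lambda u, l=l: P(u, l) - P(u, l - 1)
        total += qmc_mean(diff, max(n0 >> l, 8), 1, z)
    return total
```

For this toy problem the exact value is int_0^1 int_0^1 exp(u*x) dx du ≈ 1.3179, and `mlqmc(4)` recovers it closely while spending most samples on the coarsest level, mirroring the cost reduction the abstract attributes to considering multiple levels.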

Citations (6)
