
General multilevel adaptations for stochastic approximation algorithms (1506.05482v2)

Published 17 Jun 2015 in math.PR

Abstract: In this article we present and analyse new multilevel adaptations of stochastic approximation algorithms for the computation of a zero of a function $f\colon D \to \mathbb R^d$ defined on a convex domain $D\subset \mathbb R^d$, which is given as a parameterised family of expectations. Our approach is universal in the sense that, given multilevel implementations for a particular application, it is straightforward to implement the corresponding stochastic approximation algorithm. Moreover, previous research on multilevel Monte Carlo can be incorporated in a natural way. This is due to the fact that the analysis of the error and the computational cost of our method is based on assumptions similar to those used in Giles (2008) for the computation of a single expectation. Additionally, we essentially only require that $f$ satisfies a classical contraction property from stochastic approximation theory. Under these assumptions we establish error bounds in $p$-th mean for our multilevel Robbins-Monro and Polyak-Ruppert schemes that decay in the computational time as fast as the classical error bounds for multilevel Monte Carlo approximations of single expectations known from Giles (2008).
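To make the construction concrete: the abstract describes a Robbins-Monro iteration $\theta_{n+1} = \theta_n - \gamma_n \hat f(\theta_n)$ in which the expectation defining $f$ is replaced by a multilevel Monte Carlo estimator in the style of Giles (2008). The sketch below is a toy illustration only, not the paper's scheme: the target $f(\theta) = \theta - \mathbb E[\exp(X)]$ with $X \sim \mathcal N(0,1)$, the level-$\ell$ approximations (truncated Taylor polynomials), and the sample allocation are all hypothetical choices made for the example.

```python
import random

def g_level(x, l):
    """Hypothetical level-l approximation of exp(x): Taylor polynomial
    of degree l. Bias vanishes as the level l increases."""
    s, term = 1.0, 1.0
    for k in range(1, l + 1):
        term *= x / k
        s += term
    return s

def mlmc_estimate(L, n_per_level):
    """Multilevel Monte Carlo estimate of E[exp(X)], X ~ N(0,1):
    a coarse level-0 average plus coupled correction terms
    E[g_l - g_{l-1}] estimated with fewer samples per finer level."""
    est = 0.0
    for l in range(L + 1):
        n = n_per_level(l)
        acc = 0.0
        for _ in range(n):
            x = random.gauss(0.0, 1.0)
            if l == 0:
                acc += g_level(x, 0)
            else:
                # same sample x drives both levels: the coupling that
                # keeps the variance of the correction small
                acc += g_level(x, l) - g_level(x, l - 1)
        est += acc / n
    return est

def multilevel_robbins_monro(steps=2000, L=8, seed=0):
    """Robbins-Monro iteration theta <- theta - gamma_n * f_hat(theta),
    with the expectation inside f replaced by a multilevel estimator.
    The zero of f(theta) = theta - E[exp(X)] is exp(1/2) ~ 1.6487."""
    random.seed(seed)
    theta = 0.0
    for n in range(1, steps + 1):
        gamma = 1.0 / n  # classical step-size sequence
        f_hat = theta - mlmc_estimate(L, lambda l: max(1, 2 ** (L - l)))
        theta -= gamma * f_hat
    return theta
```

With this step-size choice the iterate is a running average of the multilevel estimates, so it settles near the root $\exp(1/2) \approx 1.6487$; the paper's contribution is precisely the analysis showing how to balance levels, sample sizes, and step sizes so that the $p$-th mean error decays in computational cost at the multilevel Monte Carlo rate.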
