Ergodic convergence of a stochastic proximal point algorithm (1504.05400v2)
Abstract: The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates $(x_n)$ given by $x_{n+1} = (I+\lambda_n A(\xi_{n+1},\,.\,))^{-1}(x_n)$ where $(A(s,\,.\,):s\in E)$ is a collection of maximal monotone operators on a separable Hilbert space, $(\xi_n)$ is an independent identically distributed sequence of random variables on $E$ and $(\lambda_n)$ is a positive sequence in $\ell^2\backslash \ell^1$. The weighted averaged sequence of iterates is shown to converge weakly to a zero (assumed to exist) of the Aumann expectation ${\mathbb E}(A(\xi_1,\,.\,))$ under the assumption that the latter is maximal. We consider applications to stochastic optimization problems of the form $\min {\mathbb E}(f(\xi_1,x))$ w.r.t. $x\in \bigcap_{i=1}^m X_i$ where $f$ is a normal convex integrand and $(X_i)$ is a collection of closed convex sets. In this case, the iterations are closely related to a stochastic proximal algorithm recently proposed by Wang and Bertsekas.
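The following is a minimal numerical sketch, not taken from the paper, of the iteration $x_{n+1} = (I+\lambda_n A(\xi_{n+1},\,\cdot\,))^{-1}(x_n)$ together with the weighted (ergodic) averaging of the iterates. It assumes the special case where $A(\xi,\,\cdot\,)$ is the gradient of a per-sample least-squares loss, so the resolvent is a proximal operator with a closed form; the dimension, sampling model, and step sizes $\lambda_n = n^{-0.75} \in \ell^2\backslash\ell^1$ are illustrative choices.

```python
# Hedged sketch: stochastic proximal point iteration applied to
# min_x E[ 0.5*(a(xi)^T x - b(xi))^2 ], with A(xi, .) the gradient of the
# per-sample loss, so (I + lam*A(xi, .))^{-1} is a closed-form prox step.
import numpy as np

rng = np.random.default_rng(0)
d = 5
x_star = rng.normal(size=d)              # ground-truth minimizer (illustrative)

def sample():
    """Draw xi = (a, b) i.i.d., with b = a^T x_star + noise."""
    a = rng.normal(size=d)
    b = a @ x_star + 0.1 * rng.normal()
    return a, b

def prox_step(x, a, b, lam):
    """Resolvent (I + lam*A(xi, .))^{-1}(x) for the per-sample quadratic loss."""
    return x - lam * a * (a @ x - b) / (1.0 + lam * (a @ a))

x = np.zeros(d)
weighted_sum, weight = np.zeros(d), 0.0
for n in range(1, 20001):
    lam = 1.0 / n ** 0.75                # (lambda_n) in l^2 \ l^1
    a, b = sample()
    x = prox_step(x, a, b, lam)
    weighted_sum += lam * x              # weighted ergodic average of the iterates
    weight += lam

x_bar = weighted_sum / weight
print("error of averaged iterate:", np.linalg.norm(x_bar - x_star))
```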