Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function (2201.00703v3)
Abstract: In this paper, we revisit Stochastic Continuous Submodular Maximization in both offline and online settings, which can benefit a wide range of applications in machine learning and operations research. We present a boosting framework covering gradient ascent and online gradient ascent. The fundamental ingredient of our methods is a novel non-oblivious function $F$ derived from a factor-revealing optimization problem, any stationary point of which provides a $(1-e^{-\gamma})$-approximation to the global maximum of the $\gamma$-weakly DR-submodular objective function $f\in C^{1,1}_L(\mathcal{X})$. In the offline scenario, we propose a boosting gradient ascent method achieving a $(1-e^{-\gamma}-\epsilon^{2})$-approximation after $O(1/\epsilon^2)$ iterations, which improves upon the $(\frac{\gamma^2}{1+\gamma^2})$-approximation ratio of the classical gradient ascent algorithm. In the online setting, for the first time we consider adversarial delays for stochastic gradient feedback, under which we propose a boosting online gradient algorithm with the same non-oblivious function $F$. Moreover, we verify that this boosting online algorithm achieves a regret of $O(\sqrt{D})$ against a $(1-e^{-\gamma})$-approximation to the best feasible solution in hindsight, where $D$ is the sum of the delays of the gradient feedback. To the best of our knowledge, this is the first result to obtain $O(\sqrt{T})$ regret against a $(1-e^{-\gamma})$-approximation with $O(1)$ gradient queries at each time step when no delay exists, i.e., $D=T$. Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
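To make the boosting idea concrete, here is a minimal Python sketch (not the authors' code) of projected stochastic gradient ascent on a non-oblivious surrogate: instead of feeding $\nabla f(x)$ to the ascent step, one feeds a gradient of $F$, estimated by sampling a scaling $z\in[0,1]$ and querying $\nabla f(z\cdot x)$. The specific weight $e^{\gamma(z-1)}$, the function names, and the step-size schedule below are assumptions made for illustration; the paper's exact construction of $F$ should be consulted for the precise estimator.

```python
# Hedged sketch of "boosting via a non-oblivious function": run projected stochastic
# gradient ascent, but replace grad f(x) with a one-sample estimate of the surrogate
# gradient  grad F(x) = \int_0^1 e^{gamma (z-1)} grad f(z x) dz  (weight assumed here).
import numpy as np


def boosted_stochastic_gradient(grad_f, x, gamma=1.0, rng=None):
    """One-sample estimator of the assumed non-oblivious gradient at x.

    grad_f : callable returning a (possibly stochastic) gradient of f at a point
    gamma  : weak DR-submodularity parameter of f
    """
    rng = np.random.default_rng() if rng is None else rng
    # Inverse-transform sampling of z on [0, 1] from the density proportional to e^{gamma (z-1)}.
    u = rng.random()
    z = 1.0 + np.log(u * (1.0 - np.exp(-gamma)) + np.exp(-gamma)) / gamma
    # Rescale by the normalizing constant so the single sample is unbiased for the
    # weighted integral above.
    return ((1.0 - np.exp(-gamma)) / gamma) * grad_f(z * x)


def boosting_gradient_ascent(grad_f, project, x0, step_size=0.1, iters=1000, gamma=1.0):
    """Projected stochastic gradient ascent on the non-oblivious surrogate.

    project : Euclidean projection onto the feasible convex set X (problem-specific)
    """
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        g = boosted_stochastic_gradient(grad_f, x, gamma=gamma)
        x = project(x + step_size * g)  # ascent step on F, not on f
    return x
```

The online variant described in the abstract follows the same template, except that each step uses whatever (possibly delayed) stochastic gradient feedback has arrived, so only the gradient source changes, not the surrogate construction.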
- Qixin Zhang
- Zengde Deng
- Zaiyi Chen
- Haoyuan Hu
- Yu Yang