Parallel algorithms for maximizing one-sided $\sigma$-smooth function (2206.05841v1)

Published 12 Jun 2022 in math.OC

Abstract: In this paper, we study the problem of maximizing a monotone normalized one-sided $\sigma$-smooth ($OSS$ for short) function $F(x)$, subject to a convex polytope. This problem was first introduced by Mehrdad et al. \cite{GSS2021} to characterize the multilinear extension of some set functions. In contrast to the serial Jump-Start Continuous Greedy algorithm of Mehrdad et al. \cite{GSS2021}, we propose the Jump-Start Parallel Greedy (JSPG for short) algorithm, the first parallel algorithm for this problem. The approximation ratio of the JSPG algorithm is proved to be $\big(1-e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}}\big)-\epsilon$ for any number $\alpha\in(0,1]$ and $\epsilon>0$. We also prove that our JSPG algorithm runs in $O(\log n/\epsilon^{2})$ adaptive rounds and consumes $O(n \log n/\epsilon^{2})$ queries. In addition, we study the stochastic version of maximizing a monotone normalized $OSS$ function, in which the objective function $F(x)$ is defined as $F(x)=\mathbb{E}_{y\sim T}f(x,y)$. Here $f$ is a stochastic function with respect to the random variable $Y$, and $y$ is the realization of $Y$ drawn from a probability distribution $T$. For this stochastic version, we design the Stochastic Parallel-Greedy (SPG) algorithm, which achieves $F(x)\geq\big(1-e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}}-\epsilon\big)OPT-O(\kappa^{1/2})$, with the same time complexity as the JSPG algorithm. Here $\kappa=\frac{\max\big\{5\|\nabla F(x_{0})-d_{o}\|^{2},\; 16\sigma^{2}+2L^{2}D^{2}\big\}}{(t+9)^{2/3}}$ depends on the preset parameters $\sigma, L, D$ and the time $t$.
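For readability, the two guarantees stated in the abstract can be collected in display form. Reading the JSPG approximation ratio as a multiplicative guarantee against the optimal value $OPT$ is an interpretation here (the abstract states only the ratio), chosen to match how the stochastic bound is written:

\[
F(x) \;\geq\; \Big(1 - e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}} - \epsilon\Big)\, OPT
\qquad \text{(JSPG, for any } \alpha \in (0,1],\ \epsilon > 0\text{),}
\]

using $O(\log n/\epsilon^{2})$ adaptive rounds and $O(n \log n/\epsilon^{2})$ queries, and

\[
F(x) \;\geq\; \Big(1 - e^{-\left(\frac{\alpha}{\alpha+1}\right)^{2\sigma}} - \epsilon\Big)\, OPT \;-\; O\big(\kappa^{1/2}\big),
\qquad
\kappa = \frac{\max\big\{5\,\|\nabla F(x_{0}) - d_{o}\|^{2},\; 16\sigma^{2} + 2L^{2}D^{2}\big\}}{(t+9)^{2/3}}
\qquad \text{(SPG),}
\]

with the same adaptivity and query complexity as JSPG.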
