Randomized Composable Core-sets for Distributed Submodular Maximization (1506.06715v1)

Published 22 Jun 2015 in cs.DS and cs.DC

Abstract: An effective technique for solving optimization problems over massive data sets is to partition the data into smaller pieces, solve the problem on each piece and compute a representative solution from it, and finally obtain a solution inside the union of the representative solutions for all pieces. This technique can be captured via the concept of composable core-sets, and has recently been applied to solve diversity maximization problems as well as several clustering problems. However, for coverage and submodular maximization problems, impossibility bounds are known for this technique [IMMM14]. In this paper, we focus on efficient construction of a randomized variant of composable core-sets where the above idea is applied on a random clustering of the data. We employ this technique for the coverage, monotone and non-monotone submodular maximization problems. Our results significantly improve upon the hardness results for non-randomized core-sets, and imply improved results for submodular maximization in distributed and streaming settings. In summary, we show that a simple greedy algorithm results in a $1/3$-approximate randomized composable core-set for submodular maximization under a cardinality constraint. This is in contrast to a known $O({\log k\over \sqrt{k}})$ impossibility result for (non-randomized) composable core-sets. Our result also extends to non-monotone submodular functions, and leads to the first 2-round MapReduce-based constant-factor approximation algorithm with $O(n)$ total communication complexity for either monotone or non-monotone functions. Finally, using an improved analysis technique and a new algorithm $\mathsf{PseudoGreedy}$, we present an improved $0.545$-approximation algorithm for monotone submodular maximization, which is in turn the first MapReduce-based algorithm beating factor $1/2$ in a constant number of rounds.
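The two-round scheme described in the abstract (randomly partition the data, run greedy on each piece to build a core-set, then run greedy again over the union of the core-sets) is concrete enough to sketch. The following Python is a minimal illustrative sketch under these assumptions, not the authors' implementation; the function names (greedy, randomized_coreset_greedy) and the toy coverage objective are hypothetical.

```python
import random

def greedy(f, elements, k):
    """Standard greedy: repeatedly add the element with the largest
    marginal gain f(S + [e]) - f(S), up to k elements."""
    S = []
    candidates = set(elements)
    for _ in range(min(k, len(candidates))):
        best, best_gain = None, float("-inf")
        for e in candidates:
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        S.append(best)
        candidates.remove(best)
    return S

def randomized_coreset_greedy(f, ground_set, k, num_machines, seed=0):
    """Sketch of the randomized composable core-set approach:
    round 1 - randomly cluster the data and run greedy on each piece
    to get a size-k core-set; round 2 - run greedy on the union of the
    core-sets and return the best of all candidate solutions."""
    rng = random.Random(seed)
    pieces = [[] for _ in range(num_machines)]
    for e in ground_set:
        pieces[rng.randrange(num_machines)].append(e)  # random clustering

    # Round 1: each machine computes a core-set independently.
    coresets = [greedy(f, piece, k) for piece in pieces]

    # Round 2: one machine aggregates the core-sets and runs greedy again.
    union = [e for cs in coresets for e in cs]
    aggregated = greedy(f, union, k)

    return max(coresets + [aggregated], key=f)

# Hypothetical usage with a simple coverage objective f(S) = |union of sets in S|.
if __name__ == "__main__":
    universe_sets = {i: {i, (i * 7) % 50, (i * 13) % 50} for i in range(200)}
    cover = lambda S: len(set().union(*(universe_sets[i] for i in S))) if S else 0
    print(randomized_coreset_greedy(cover, list(universe_sets), k=10, num_machines=4))
```

Per the abstract, with a random partition this simple strategy already gives a constant-factor (1/3) approximation under a cardinality constraint, whereas deterministic partitions cannot beat $O({\log k\over \sqrt{k}})$; the stronger 0.545 bound needs the separate PseudoGreedy algorithm.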

Citations (127)
