On the Hardness of Massively Parallel Computation (2008.06554v1)

Published 14 Aug 2020 in cs.DS

Abstract: We investigate whether there are inherent limits to parallelization in the (randomized) massively parallel computation (MPC) model by comparing it with the (sequential) RAM model. As our main result, we show the existence of hard functions that are essentially not parallelizable in the MPC model. Based on the widely used random-oracle methodology in cryptography, with a cryptographic hash function $h:\{0,1\}^n \rightarrow \{0,1\}^n$ computable in time $t_h$, we show that there exists a function that can be computed in time $O(T \cdot t_h)$ and space $S$ by a RAM algorithm, but any MPC algorithm with local memory size $s < S/c$ for some $c > 1$ requires at least $\tilde{\Omega}(T)$ rounds to compute the function, even in the average case, for a wide range of parameters $n \leq S \leq T \leq 2^{n^{1/4}}$. Our result is almost optimal in the sense that when $T$ is taken much larger than $t_h$ (\textit{e.g.}, $T$ sub-exponential in $t_h$), the round complexity of any MPC algorithm with small local memory is asymptotically the same, up to a polylogarithmic factor, as the time complexity of the RAM algorithm. Our result is obtained by adapting the so-called compression argument from the data-structure lower bounds and cryptography literature to the context of massively parallel computation.
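The abstract does not spell out the hard function itself, but the flavor of such random-oracle constructions can be conveyed with a toy sketch. The Python below is a hypothetical illustration, not the paper's construction: `H` stands in for the oracle $h$ (instantiated with SHA-256, so $n = 256$), and the parameters `T` and `S_words` are assumptions standing in for the abstract's $T$ and $S$. It chains $T$ hash evaluations through a buffer of roughly $S$ bits; since each step's input depends on the previous output, a RAM machine spends about $T \cdot t_h$ time sequentially, while intuitively an MPC machine whose local memory is much smaller than the buffer cannot shortcut the chain.

```python
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in for the random oracle h: {0,1}^n -> {0,1}^n (here n = 256)."""
    return hashlib.sha256(data).digest()

def hard_function(x: bytes, T: int, S_words: int) -> bytes:
    """Hypothetical sketch of a sequentially hard function (NOT the paper's
    exact construction): a T-step pointer-chasing walk over a buffer of
    S_words hash values derived from the input."""
    # Fill a buffer of S_words words from the seed: the "space S" part.
    buf = [H(x + i.to_bytes(8, "big")) for i in range(S_words)]
    cur = H(x)
    for step in range(T):
        # The next index to read is only known after the previous hash is
        # computed, which is what defeats parallel evaluation of the chain.
        idx = int.from_bytes(cur[:8], "big") % S_words
        cur = H(cur + buf[idx] + step.to_bytes(8, "big"))
    return cur

# Example usage with small toy parameters.
print(hard_function(b"seed", T=1000, S_words=256).hex())
```

This mirrors only the high-level intuition behind the lower bound; the paper's actual argument works in the random-oracle model and proves hardness via a compression argument rather than by exhibiting code.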
