Seeing Far vs. Seeing Wide: Volume Complexity of Local Graph Problems (1907.08160v2)

Published 18 Jul 2019 in cs.DC

Abstract: Consider a graph problem that is locally checkable but not locally solvable: given a solution we can check that it is feasible by verifying all constant-radius neighborhoods, but to find a solution each node needs to explore the input graph at least up to distance $\Omega(\log n)$ in order to produce its output. We consider the complexity of such problems from the perspective of volume: how large a subgraph does a node need to see in order to produce its output. We study locally checkable graph problems on bounded-degree graphs. We give a number of constructions that exhibit tradeoffs between deterministic distance, randomized distance, deterministic volume, and randomized volume:

- If the deterministic distance is linear, it is also known that the randomized distance is near-linear. In contrast, we show that there are problems with linear deterministic volume but only logarithmic randomized volume.
- We prove a volume hierarchy theorem for randomized complexity: among problems with linear deterministic volume complexity, there are infinitely many distinct randomized volume complexity classes between $\Omega(\log n)$ and $O(n)$. This hierarchy persists even when restricting to problems whose randomized and deterministic distance complexities are $\Theta(\log n)$.
- Similar hierarchies exist for polynomial distance complexities: for any $k, \ell \in \mathbb{N}$ with $k \leq \ell$, there are problems whose randomized and deterministic distance complexities are $\Theta(n^{1/\ell})$, randomized volume complexities are $\Theta(n^{1/k})$, and whose deterministic volume complexities are $\Theta(n)$.

Additionally, we consider connections between our volume model and massively parallel computation (MPC). We give a general simulation argument that any volume-efficient algorithm can be transformed into a space-efficient MPC algorithm.
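
The distinction driving these results is between how far a node must look (distance, the exploration radius) and how much of the graph it must actually read (volume, the number of nodes probed). Below is a minimal sketch of that distinction, assuming a hypothetical adjacency-list representation; the `explore` function and its parameters are illustrative only and are not taken from the paper.

```python
from collections import deque

def explore(adj, root, volume_budget):
    """BFS from `root` over the adjacency map `adj` (a hypothetical
    dict: node -> list of neighbours), charging one unit of volume per
    node whose neighbourhood is probed and stopping once the volume
    budget is spent. Returns (volume used, largest distance seen)."""
    dist = {root: 0}            # node -> distance from root
    queue = deque([root])
    probed = 0                  # volume used so far
    while queue and probed < volume_budget:
        u = queue.popleft()
        probed += 1             # probing u's adjacency list costs one unit of volume
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return probed, max(dist.values())

# Example: on a path graph, seeing far costs linear volume, whereas in a
# bounded-degree graph a volume budget of Delta^d already covers every
# node within distance d.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
print(explore(path, 0, volume_budget=4))   # -> (4, 4): volume grows with distance on a path
```

This is only meant to make the two cost measures concrete; the paper's lower-bound constructions and the volume-to-MPC simulation argument operate on formal models of these quantities, not on any particular exploration routine.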

Citations (14)
