
Distributed Computing with Channel Noise (1612.05943v2)

Published 18 Dec 2016 in cs.CR, cs.IT, and math.IT

Abstract: A group of $n$ users want to run a distributed protocol $\pi$ over a network where communication occurs via private point-to-point channels. Unfortunately, an adversary, who knows $\pi$, is able to maliciously flip bits on the channels. Can we efficiently simulate $\pi$ in the presence of such an adversary? We show that this is possible, even when $L$, the number of bits sent in $\pi$, and $T$, the number of bits flipped by the adversary, are not known in advance. In particular, we show how to create a robust version of $\pi$ that 1) fails with probability at most $\delta$, for any $\delta>0$; and 2) sends $\tilde{O}(L + T)$ bits, where the $\tilde{O}$ notation hides a $\log (nL/ \delta)$ term multiplying $L$. Additionally, we show how to improve this result when the average message size $\alpha$ is not constant. In particular, we give an algorithm that sends $O( L (1 + (1/\alpha) \log (n L/\delta)) + T)$ bits. This algorithm is adaptive in that it does not require a priori knowledge of $\alpha$. We note that if $\alpha$ is $\Omega\left( \log (n L/\delta) \right)$, then this improved algorithm sends only $O(L+T)$ bits, and is therefore within a constant factor of optimal.
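The scaling behavior of the improved bound can be checked numerically. The sketch below is illustrative only: it sets all hidden constants in $O(L(1 + (1/\alpha)\log(nL/\delta)) + T)$ to 1, and the parameter values are arbitrary assumptions chosen to show how the per-bit overhead collapses once $\alpha$ reaches $\log(nL/\delta)$.

```python
import math

def robust_bits(L, T, n, alpha, delta):
    """Illustrative cost from the bound O(L(1 + (1/alpha) log(nL/delta)) + T).
    All big-O constants are set to 1, so values show scaling, not exact counts."""
    return L * (1 + (1.0 / alpha) * math.log(n * L / delta)) + T

n, L, T, delta = 100, 10**6, 10**3, 0.01

# With constant-size messages (alpha = 1), the log factor multiplies every bit of L:
small_msgs = robust_bits(L, T, n, alpha=1, delta=delta)

# Once alpha = log(nL/delta), the per-bit overhead is O(1), i.e. O(L + T) total:
alpha_big = math.log(n * L / delta)
large_msgs = robust_bits(L, T, n, alpha=alpha_big, delta=delta)

print(small_msgs)  # roughly L * (1 + log(nL/delta)): about 24x L here
print(large_msgs)  # exactly 2L + T in this toy model: constant-factor of L + T
```

With these toy constants, $\log(nL/\delta) = \log(10^{10}) \approx 23$, so the small-message regime pays roughly a 24x multiplicative overhead on $L$, while the large-message regime pays only 2x, matching the paper's claim that the adaptive algorithm is within a constant factor of optimal when $\alpha = \Omega(\log(nL/\delta))$.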

