Information Spreading in Dynamic Networks under Oblivious Adversaries

Published 19 Jul 2016 in cs.DC (arXiv:1607.05645v1)

Abstract: We study the problem of gossip in dynamic networks controlled by an adversary that can modify the network arbitrarily from one round to another, provided that the network is always connected. In the gossip problem, $n$ tokens are arbitrarily distributed among the $n$ network nodes, and the goal is to disseminate all the $n$ tokens to every node. Our focus is on token-forwarding algorithms, which do not manipulate tokens in any way other than storing, copying, and forwarding them. Gossip can be completed in linear time in any static network, but a basic open question for dynamic networks is the existence of a distributed protocol that can do significantly better than an easily achievable bound of $O(n^2)$ rounds. In previous work, it has been shown that under adaptive adversaries, every token-forwarding algorithm requires $\Omega(n^2/\log n)$ rounds. In this paper, we study oblivious adversaries, which differ from adaptive adversaries in one crucial aspect: they are oblivious to the random choices made by the protocol. We present an $\tilde{\Omega}(n^{3/2})$ lower bound under an oblivious adversary for RANDDIFF, a natural algorithm in which neighbors exchange a token chosen uniformly at random from the difference of their token sets. We also present an $\tilde{\Omega}(n^{4/3})$ bound under a stronger notion of oblivious adversary for symmetric knowledge-based algorithms. On the positive side, we present a centralized algorithm that completes gossip in $\tilde{O}(n^{3/2})$ rounds. We also show an $\tilde{O}(n^{5/3})$ upper bound for RANDDIFF in a restricted class of oblivious adversaries, which we call paths-respecting.
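
The sketch below is a minimal simulation of RANDDIFF as described in the abstract, not an implementation from the paper. It assumes, for simplicity, that node $i$ initially holds token $i$ (the paper allows arbitrary initial placement) and that the simulator can compute each edge's token-set difference directly; the adversary is modeled as a round-indexed function returning a connected edge set fixed in advance, i.e., independently of the protocol's coin flips (the oblivious property). The function name `randdiff_gossip` and its interface are illustrative.

```python
import random

def randdiff_gossip(n, adversary, max_rounds=None):
    """Simulate RANDDIFF gossip on an adversarially chosen dynamic network.

    n          -- number of nodes; node i starts with token i (assumption).
    adversary  -- callable: round number -> list of undirected edges (u, v)
                  on nodes {0, ..., n-1}; the sequence of graphs must be
                  fixed in advance to model an oblivious adversary.
    Returns the number of rounds until every node holds all n tokens.
    """
    tokens = [{i} for i in range(n)]      # current token set of each node
    goal = set(range(n))
    rounds = 0
    while any(t != goal for t in tokens):
        if max_rounds is not None and rounds >= max_rounds:
            break
        edges = adversary(rounds)
        sends = []                        # buffer deliveries so every send
                                          # uses this round's token sets
        for u, v in edges:
            diff_uv = tokens[u] - tokens[v]   # tokens u has that v lacks
            diff_vu = tokens[v] - tokens[u]   # tokens v has that u lacks
            if diff_uv:
                sends.append((v, random.choice(sorted(diff_uv))))
            if diff_vu:
                sends.append((u, random.choice(sorted(diff_vu))))
        for node, tok in sends:
            tokens[node].add(tok)
        rounds += 1
    return rounds

if __name__ == "__main__":
    # Example: a fixed path graph, a simple (and weak) oblivious adversary.
    path_edges = [(i, i + 1) for i in range(9)]
    print(randdiff_gossip(10, lambda rnd: path_edges))
```

Because each round's graph is connected, some edge joins nodes with differing token sets whenever gossip is incomplete, so at least one token is transferred per round and the simulation terminates.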
