Fully Adaptive Self-Stabilizing Transformer for LCL Problems (2105.09756v3)

Published 20 May 2021 in cs.DC

Abstract: The first generic self-stabilizing transformer for local problems in a constrained bandwidth model is introduced. This transformer can be applied to a wide class of locally checkable labeling (LCL) problems, converting a given fault-free synchronous algorithm that satisfies certain conditions into a self-stabilizing synchronous algorithm for the same problem. The resulting self-stabilizing algorithms are anonymous, size-uniform, and \emph{fully adaptive} in the sense that their time complexity is bounded as a function of the number $k$ of nodes that suffered faults (possibly at different times) since the last legal configuration. Specifically, for graphs whose degrees are bounded from above by $\Delta$, the algorithms produced by the transformer stabilize in time proportional to $\log (k + \Delta)$ in expectation, independently of the number of nodes in the graph. As such, the transformer is applicable also to infinite graphs (with degree bound $\Delta$). Another appealing feature of the transformer is its small message size overhead. The transformer is applied to known algorithms (or simple variants thereof) for some classic LCL problems, producing the first anonymous size-uniform self-stabilizing algorithms for these problems that are provably fully adaptive. From a technical point of view, the transformer's key design feature is a novel probabilistic tool that allows different nodes to act in synchrony even though their clocks may have been adversarially manipulated.
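To make the notion of a locally checkable labeling concrete, the following is a minimal Python sketch (not taken from the paper) for one classic LCL problem, maximal independent set (MIS): a labeling is legal exactly when every node's radius-1 neighborhood satisfies a local constraint. The graph representation and helper names are illustrative assumptions.

```python
# Illustrative sketch of an LCL problem: maximal independent set (MIS).
# A configuration is legal iff every node passes a check on its radius-1 neighborhood.
# Names and graph encoding are assumptions for illustration, not from the paper.

IN, OUT = "IN", "OUT"

def locally_valid(node, labels, adj):
    """Radius-1 LCL constraint for MIS at a single node."""
    if labels[node] == IN:
        # Independence: an IN node must have no IN neighbor.
        return all(labels[nbr] != IN for nbr in adj[node])
    # Maximality: an OUT node must have at least one IN neighbor.
    return any(labels[nbr] == IN for nbr in adj[node])

def globally_legal(labels, adj):
    """Legality is the conjunction of the local checks -- this local
    checkability is what makes MIS an LCL problem."""
    return all(locally_valid(v, labels, adj) for v in adj)

if __name__ == "__main__":
    # 4-cycle: 0-1-2-3-0
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(globally_legal({0: IN, 1: OUT, 2: IN, 3: OUT}, adj))  # True: legal MIS
    print(globally_legal({0: IN, 1: IN, 2: OUT, 3: OUT}, adj))  # False: adjacent IN nodes
```

The paper's transformer concerns algorithms that recompute such labelings after faults; the point of the sketch is only that legality can be verified by constant-radius local checks, which is the defining property of LCL problems.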

Citations (2)
