
On the Power of Centralization in Distributed Processing (1203.5026v1)

Published 22 Mar 2012 in cs.DC, cs.NI, cs.PF, and math.PR

Abstract: In this thesis, we propose and analyze a multi-server model that captures a performance trade-off between centralized and distributed processing. In our model, a fraction $p$ of an available resource is deployed in a centralized manner (e.g., to serve the most loaded station) while the remaining fraction $1-p$ is allocated to local servers that can only serve requests addressed specifically to their respective stations. Using a fluid model approach, we demonstrate a surprising phase transition in the steady-state delay, as $p$ changes: in the limit of a large number of stations, and when any amount of centralization is available ($p>0$), the average queue length in steady state scales as $\log_{1/(1-p)} 1/(1-\lambda)$ when the traffic intensity $\lambda$ goes to 1. This is exponentially smaller than the usual M/M/1-queue delay scaling of $1/(1-\lambda)$, obtained when all resources are fully allocated to local stations ($p=0$). This indicates a strong qualitative impact of even a small degree of centralization. We prove convergence to a fluid limit, and characterize both the transient and steady-state behavior of the finite system, in the limit as the number of stations $N$ goes to infinity. We show that the sequence of queue-length processes converges to a unique fluid trajectory (over any finite time interval), as $N$ approaches infinity, and that this fluid trajectory converges to a unique invariant state $v^I$, for which a simple closed-form expression is obtained. We also show that the steady-state distribution of the $N$-server system concentrates on $v^I$ as $N$ goes to infinity.
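To make the gap between the two regimes concrete, here is a minimal numeric sketch (not from the paper) that evaluates the two scaling expressions stated in the abstract: the centralized scaling $\log_{1/(1-p)} 1/(1-\lambda)$ for $p>0$, and the fully distributed M/M/1-type scaling $1/(1-\lambda)$ for $p=0$. The choice of $p = 0.05$ and the sample values of $\lambda$ are illustrative assumptions, not values taken from the thesis.

```python
import math

def centralized_scaling(lam: float, p: float) -> float:
    """Steady-state queue-length scaling with centralization fraction p > 0:
    log base 1/(1-p) of 1/(1-lam), as stated in the abstract."""
    return math.log(1.0 / (1.0 - lam), 1.0 / (1.0 - p))

def distributed_scaling(lam: float) -> float:
    """Fully distributed (p = 0) M/M/1-type scaling: 1/(1-lam)."""
    return 1.0 / (1.0 - lam)

p = 0.05  # illustrative: even a small degree of centralization
for lam in (0.9, 0.99, 0.999, 0.9999):
    print(f"lambda = {lam}: "
          f"centralized ~ {centralized_scaling(lam, p):8.1f}, "
          f"distributed ~ {distributed_scaling(lam):10.1f}")
```

As $\lambda \to 1$, the distributed term blows up like $1/(1-\lambda)$ while the centralized term grows only logarithmically in $1/(1-\lambda)$, which is the exponential separation the abstract describes.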
