Fundamental Limits of Stochastic Shared-Cache Networks (2005.13847v3)

Published 28 May 2020 in cs.IT and math.IT

Abstract: This work establishes the exact performance limits of stochastic coded caching when users share a bounded number of cache states and the association between users and caches is random. Under the premise that more balanced user-to-cache associations perform better than unbalanced ones, our work provides a statistical analysis of the average performance of such networks, identifying in closed form the exact optimal average delivery time. To insightfully capture this delay, we derive easy-to-compute closed-form analytical bounds that prove tight in the limit of a large number $\Lambda$ of cache states. In the scenario where delivery involves $K$ users, we conclude that the multiplicative performance deterioration due to randomness -- as compared to the well-known deterministic uniform case -- can be unbounded and can scale as $\Theta\left( \frac{\log \Lambda}{\log \log \Lambda} \right)$ at $K=\Theta\left(\Lambda\right)$, and that this scaling vanishes when $K=\Omega\left(\Lambda\log \Lambda\right)$. To alleviate this adverse effect of cache-load imbalance, we consider various load balancing methods and show that, when employing proximity-bounded load balancing with the ability to choose from $h$ neighboring caches, the aforementioned scaling reduces to $\Theta \left(\frac{\log(\Lambda / h)}{ \log \log(\Lambda / h)} \right)$, while when the proximity constraint is removed, the scaling is of a much slower order $\Theta \left( \log \log \Lambda \right)$. The above analysis is extensively validated numerically.
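The scalings quoted in the abstract mirror classical balls-into-bins load results: with $K=\Theta(\Lambda)$ users associated uniformly at random to $\Lambda$ caches, the most loaded cache carries on the order of $\log \Lambda / \log \log \Lambda$ users, whereas letting each user join the least loaded of $h$ unrestricted candidate caches shrinks the imbalance to the order of $\log \log \Lambda$. The sketch below is a minimal Monte Carlo illustration of this contrast, not the paper's code or its delivery-time expression; the parameters K, Lam, h and the helper names are illustrative assumptions.

```python
import random

def max_load_random(K, Lam, rng):
    """Assign each of K users to one of Lam caches uniformly at random;
    return the load of the most heavily loaded cache."""
    loads = [0] * Lam
    for _ in range(K):
        loads[rng.randrange(Lam)] += 1
    return max(loads)

def max_load_h_choices(K, Lam, h, rng):
    """Each user samples h candidate caches (unrestricted, i.e. no proximity
    constraint) and joins the least loaded one; return the maximum load."""
    loads = [0] * Lam
    for _ in range(K):
        candidates = rng.sample(range(Lam), h)
        best = min(candidates, key=lambda c: loads[c])
        loads[best] += 1
    return max(loads)

if __name__ == "__main__":
    rng = random.Random(0)
    Lam, K, h, trials = 1000, 1000, 2, 20   # illustrative sizes only
    rand_max = sum(max_load_random(K, Lam, rng) for _ in range(trials)) / trials
    bal_max = sum(max_load_h_choices(K, Lam, h, rng) for _ in range(trials)) / trials
    print(f"perfectly balanced load per cache : {K / Lam:.2f}")
    print(f"avg max load, random association  : {rand_max:.2f}")
    print(f"avg max load, {h}-choice balancing  : {bal_max:.2f}")
```

Qualitatively, the random-association maximum noticeably exceeds the perfectly balanced load $K/\Lambda$, and the gap narrows sharply once each user may choose among $h \geq 2$ caches, consistent with the $\Theta\!\left(\frac{\log \Lambda}{\log \log \Lambda}\right)$ versus $\Theta(\log \log \Lambda)$ behaviors stated in the abstract.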
