Accurate Learning or Fast Mixing? Dynamic Adaptability of Caching Algorithms (1701.02214v6)

Published 9 Jan 2017 in cs.NI

Abstract: Typical analyses of content caching algorithms use the metric of steady-state hit probability under a stationary request process, which does not account for performance loss under a variable request arrival process. In this work, we consider the adaptability of caching algorithms from two perspectives: (a) the accuracy of learning a fixed popularity distribution, and (b) the speed of learning items' popularity. To this end, we compute the distance between the stationary distribution of each of several popular algorithms and that of a genie-aided algorithm that knows the true popularity ranking, which we use as a measure of learning accuracy. We then characterize the mixing time of each algorithm, i.e., the time needed to attain the stationary distribution, which we use as a measure of learning efficiency. We combine both measures into a "learning error" that captures how quickly and how accurately an algorithm learns the optimal caching distribution, and use it to characterize the trade-off between these two objectives for many popular caching algorithms. Informed by our analysis, we propose a novel hybrid algorithm, Adaptive-LRU (A-LRU), that learns changes in popularity both faster and more accurately. We show numerically that it outperforms all other candidate algorithms on both a dynamically changing synthetic request process and real-world traces.
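The abstract's accuracy measure can be illustrated with a small simulation. The sketch below is a hypothetical illustration, not code from the paper: it runs a plain LRU cache against a fixed Zipf popularity distribution and reports the hit probability together with a learning-accuracy proxy, namely the total variation distance between the empirical distribution of cached items and a genie-aided cache that permanently holds the top-C most popular items. All names and parameters here (zipf_weights, alpha, cache_size, request counts) are illustrative assumptions.

```python
import random
from collections import OrderedDict, Counter

def zipf_weights(n, alpha=0.8):
    """Normalized Zipf(alpha) popularity over items 0..n-1 (assumed model)."""
    w = [1.0 / (i + 1) ** alpha for i in range(n)]
    s = sum(w)
    return [x / s for x in w]

def simulate_lru(n_items=100, cache_size=10, n_requests=50_000, alpha=0.8, seed=0):
    rng = random.Random(seed)
    probs = zipf_weights(n_items, alpha)
    items = list(range(n_items))
    cache = OrderedDict()            # keys = cached items, in LRU order
    genie = set(range(cache_size))   # genie-aided cache: the true top-C items
    occupancy = Counter()            # time each item spends in the cache
    hits = 0
    for _ in range(n_requests):
        x = rng.choices(items, weights=probs)[0]
        if x in cache:
            hits += 1
            cache.move_to_end(x)     # LRU update on a hit
        else:
            cache[x] = True          # insert on a miss
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
        for y in cache:
            occupancy[y] += 1
    # Accuracy proxy: total variation distance between the empirical
    # distribution of cached items and the genie's distribution
    # (uniform over the top-C items).
    total = n_requests * cache_size
    tv = 0.5 * sum(
        abs(occupancy[i] / total - (1 / cache_size if i in genie else 0.0))
        for i in range(n_items)
    )
    return hits / n_requests, tv

if __name__ == "__main__":
    hit_prob, tv_dist = simulate_lru()
    print(f"LRU hit probability: {hit_prob:.3f}, TV distance to genie: {tv_dist:.3f}")
```

Re-running this with a popularity distribution that changes midway through the request stream would expose the speed-of-learning dimension the paper studies; the paper's actual analysis is over stationary distributions of cache states and formal mixing times, which this per-item proxy only approximates.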

Authors (4)
  1. Jian Li (667 papers)
  2. Srinivas Shakkottai (38 papers)
  3. John C. S. Lui (112 papers)
  4. Vijay Subramanian (29 papers)
Citations (44)
