Modeling LRU caches with Shot Noise request processes (1411.4759v4)

Published 18 Nov 2014 in cs.PF

Abstract: In this paper we analyze Least Recently Used (LRU) caches operating under the Shot Noise requests Model (SNM). The SNM was recently proposed to better capture the main characteristics of today's Video on Demand (VoD) traffic. We investigate the validity of Che's approximation through an asymptotic analysis of the cache eviction time. In particular, we provide a large deviation principle, a law of large numbers and a central limit theorem for the cache eviction time, as the cache size grows large. Finally, we derive upper and lower bounds for the "hit" probability in tandem networks of caches under Che's approximation.
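
For readers unfamiliar with Che's approximation, the sketch below illustrates the classical version of it under the Independent Reference Model (IRM) with Poisson request streams: the cache eviction (characteristic) time T_C is the root of sum_i (1 - exp(-lambda_i * T_C)) = C, and each object's hit probability is approximated by 1 - exp(-lambda_i * T_C). This is background only, not the paper's SNM analysis or its asymptotic results; the catalog size, Zipf exponent, and cache size are arbitrary example values.

```python
# Minimal sketch of Che's approximation for an LRU cache under the classical
# IRM assumption (independent Poisson request streams per object).
# Illustrative only; not the paper's Shot Noise Model analysis.
import math


def che_characteristic_time(rates, cache_size, tol=1e-9):
    """Solve sum_i (1 - exp(-rate_i * T)) = cache_size for T by bisection.

    T plays the role of the cache eviction time: an object is (approximately)
    in an LRU cache of size C iff it was requested within the last T time units.
    """
    def expected_occupancy(t):
        return sum(1.0 - math.exp(-r * t) for r in rates)

    lo, hi = 0.0, 1.0
    while expected_occupancy(hi) < cache_size:  # grow the upper bracket
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if expected_occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def hit_probabilities(rates, cache_size):
    """Per-object hit probabilities p_i ~ 1 - exp(-rate_i * T_C)."""
    t_c = che_characteristic_time(rates, cache_size)
    return [1.0 - math.exp(-r * t_c) for r in rates]


if __name__ == "__main__":
    # Example: Zipf(0.8) popularity over a catalog of 10,000 objects,
    # total request rate normalized to 1, cache holding 100 objects.
    n, alpha, cache_size = 10_000, 0.8, 100
    weights = [1.0 / (k + 1) ** alpha for k in range(n)]
    total = sum(weights)
    rates = [w / total for w in weights]

    p_hit = hit_probabilities(rates, cache_size)
    overall = sum(r * p for r, p in zip(rates, p_hit))  # request-averaged hit rate
    print(f"Che approximation: overall hit probability ~= {overall:.4f}")
```

The paper's contribution is to examine when this style of approximation remains valid once request processes are non-stationary shot noise processes rather than IRM, by studying the eviction time asymptotically as the cache size grows.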

Citations (37)
