Caching under Content Freshness Constraints (1712.10041v1)

Published 28 Dec 2017 in cs.NI

Abstract: Several real-time delay-sensitive applications pose varying degrees of freshness demands on the requested content. The performance of cache replacement policies that are agnostic to these demands is likely to be sub-optimal. Motivated by this concern, in this paper, we study caching policies under a request arrival process which incorporates user freshness demands. We consider the performance metric to be the steady-state cache hit probability. We first provide a universal upper bound on the performance of any caching policy. We then analytically obtain the content-wise hit-rates for the Least Popular (LP) policy and provide sufficient conditions for the asymptotic optimality of cache performance under this policy. Next, we obtain an accurate approximation for the LRU hit-rates in the regime of large content population. To this end, we map the characteristic time of a content in the LRU policy to the classical Coupon Collector's Problem and show that it sharply concentrates around its mean. Further, we develop modified versions of these policies which eject cache redundancies present in the form of stale contents. Finally, we propose a new policy which outperforms the above policies by explicitly using freshness specifications of user requests to prioritize among the cached contents. We corroborate our analytical insights with extensive simulations.
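To make the setting concrete, here is a minimal simulation sketch of an LRU cache in which a cached copy only yields a hit if it is still fresh. This is an illustration of the problem setup, not the paper's LP, modified, or freshness-aware policies; the fixed freshness demand `max_age` and the Zipf-like popularity weights are simplifying assumptions (the paper models per-request, varying freshness demands).

```python
import random
from collections import OrderedDict

def simulate_lru_freshness(num_contents=100, cache_size=10, zipf_s=0.8,
                           max_age=50, num_requests=20000, seed=0):
    """Estimate the steady-state hit probability of LRU when a cached
    copy counts as a hit only if its age (time since it was fetched)
    is at most max_age. A stale copy is treated as a miss and refetched."""
    rng = random.Random(seed)
    # Zipf-like popularity: content i is requested with weight 1/(i+1)^s.
    weights = [1.0 / (i + 1) ** zipf_s for i in range(num_contents)]
    cache = OrderedDict()  # maps content id -> time it was last fetched
    hits = 0
    for t in range(num_requests):
        c = rng.choices(range(num_contents), weights=weights)[0]
        if c in cache and t - cache[c] <= max_age:
            hits += 1
            cache.move_to_end(c)  # LRU: a hit refreshes recency only
        else:
            # Miss (content absent, or present but stale): fetch fresh copy.
            cache.pop(c, None)
            cache[c] = t
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / num_requests
```

With `max_age` set very large this reduces to classical LRU; tightening the freshness demand can only lower the hit probability, which is the gap that freshness-aware policies aim to close.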

Authors (3)
  1. Pawan Poojary (3 papers)
  2. Sharayu Moharir (30 papers)
  3. Krishna Jagannathan (42 papers)
Citations (2)
