
Optimal Edge Caching For Individualized Demand Dynamics (2310.14631v1)

Published 23 Oct 2023 in cs.NI

Abstract: Ever-growing end-user data demands and simultaneous reductions in memory costs are fueling edge-caching deployments. Caching at the edge is substantially different from caching at the core and needs to take into account the nature of individual data demands. For example, an individual user may not be interested in requesting the same data item again if they have recently requested it. Such individual dynamics are not apparent in the aggregated data requests at the core and have not been considered in popularity-driven caching designs for the core. Hence, these traditional caching policies could induce significant inefficiencies when applied at the edge. To address this issue, we develop new edge caching policies optimized for individual demands that also leverage overhearing opportunities at the wireless edge. With the objective of maximizing the hit ratio, the proposed policies actively evict data items that are unlikely to be requested in the near future and strategically bring them back into the cache through overhearing when they are likely to be popular again. Both theoretical analysis and numerical simulations demonstrate that the proposed edge caching policies can outperform popularity-driven policies that are optimal at the core.
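The abstract outlines the core mechanism: proactively evict items the local user has just requested (and is therefore unlikely to re-request soon), then re-acquire them for free by overhearing neighbors' transmissions once that user's demand is expected to recover. The following is a minimal Python sketch of this idea only; the renewal-style demand model, the class name, and the refractory/recovery parameters are illustrative assumptions, not the paper's actual policy or analysis.

class IndividualDemandCache:
    """Toy edge cache: evict items the local user is unlikely to
    re-request soon, and opportunistically re-insert them via
    overhearing once their predicted demand recovers.
    All names and the demand model are illustrative assumptions."""

    def __init__(self, capacity, refractory=5, recovery=20):
        self.capacity = capacity      # cache size in items
        self.refractory = refractory  # slots after a request during which
                                      # a re-request is assumed unlikely
        self.recovery = recovery      # slots after which demand is assumed
                                      # to have fully recovered
        self.cache = set()
        self.last_request = {}        # item -> time of last local request
        self.t = 0

    def _score(self, item):
        """Assumed model: higher score = more likely to be requested soon."""
        age = self.t - self.last_request.get(item, -self.recovery)
        if age < self.refractory:
            return 0.0                # just requested: unlikely again soon
        return min(1.0, age / self.recovery)

    def request(self, item):
        """Local user requests an item; returns True on a cache hit."""
        self.t += 1
        hit = item in self.cache
        self.last_request[item] = self.t
        # After serving, the item enters its refractory period, so it is a
        # natural eviction candidate; drop it proactively to free space.
        self.cache.discard(item)
        return hit

    def overhear(self, item):
        """A neighbor's transmission of `item` is overheard; cache it only if
        its predicted demand has recovered and it beats the worst cached item."""
        self.t += 1
        if self._score(item) == 0.0:
            return
        if len(self.cache) >= self.capacity:
            worst = min(self.cache, key=self._score)
            if self._score(worst) >= self._score(item):
                return
            self.cache.discard(worst)
        self.cache.add(item)

The contrast with popularity-driven policies (e.g., LFU- or LRU-style rules that are effective at the core) is that eviction and re-insertion here are driven by the individual user's request recency rather than aggregate popularity, which is the distinction the abstract emphasizes.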

Citations (1)
