Fundamental Rate-Memory Tradeoff for Coded Caching in Presence of User Inactivity (2109.14680v1)

Published 29 Sep 2021 in cs.IT and math.IT

Abstract: Coded caching utilizes proper file subpacketization and coded delivery to make full use of the multicast opportunities in content delivery, to alleviate file transfer load in massive content delivery scenarios. Most existing work considers deterministic environments. An important practical topic is to characterize the impact of the uncertainty from user inactivity on coded caching. We consider a single-server cache-enabled network under homogeneous file and network settings in the presence of user inactivity. Unlike random or probabilistic caching studied in the literature, deterministic coded caching is considered, with the objective to minimize the worst-case backhaul load by optimizing the file subpacketization and the caching strategy. First, a coded caching method is used, where each file is split into the same type of fragments labeled using sets with fixed cardinality, and the optimality of the selected cardinality is proved. Optimal file subpacketization by splitting the file into multiple types of fragments labeled with multiple cardinalities is then discussed. We show that the closed-form optimum turns out to be given by a fixed cardinality -- optimizing for user inactivity only affects file delivery; cache placement is not affected. A decentralized version is also discussed and analyzed, where each user fills its storage independently at random without centralized coordination, and user inactivity is taken into account in file delivery. Simulation results show that the optimization-based centralized coded caching scheme provides performance comparable to the ideal scenario assuming full knowledge of user inactivity in the placement phase, while decentralized caching performs slightly worse against user inactivity.
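The fixed-cardinality subpacketization the abstract refers to follows the classic centralized coded caching construction (Maddah-Ali and Niesen): with K users, each file is split into C(K, t) fragments labeled by t-subsets of users, and each user caches the fragments whose label contains it. A minimal sketch, assuming t = KM/N is an integer (function names are illustrative, not from the paper):

```python
from itertools import combinations
from math import comb

def centralized_placement(K, t):
    """Split each file into C(K, t) fragments, one per t-subset of the
    K users; user k caches every fragment whose label set contains k.
    Assumes t = K*M/N is an integer (M = cache size, N = library size)."""
    labels = list(combinations(range(K), t))
    cache = {k: [S for S in labels if k in S] for k in range(K)}
    return labels, cache

def worst_case_load(K, t):
    """Worst-case backhaul load (in files) when all K users are active:
    one coded multicast per (t+1)-subset, each 1/C(K, t) of a file."""
    return comb(K, t + 1) / comb(K, t)
```

For example, with K = 4 users and t = 2, each file splits into 6 fragments, each user caches 3 of them, and the worst-case load is 2/3 of a file. The paper's question is how this load degrades, and how the delivery phase should be re-optimized, when some users turn out to be inactive.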

Authors (2)
  1. Jialing Liao (5 papers)
  2. Olav Tirkkonen (40 papers)
Citations (6)
