
On Coded Caching with Correlated Files (1901.05732v3)

Published 17 Jan 2019 in cs.IT and math.IT

Abstract: This paper studies the fundamental limits of the shared-link coded caching problem with correlated files, where a server with a library of $N$ files communicates with $K$ users who can locally cache $M$ files. Given an integer $r \in [N]$, correlation is modeled as follows: each $r$-subset of files contains a unique common block. The tradeoff between the cache size and the average transmitted load is considered. First, a converse bound under the constraint of uncoded cache placement (i.e., each user directly stores a subset of the library bits) is derived. Then, a caching scheme for the case where every user demands a distinct file (possible for $N \geq K$) is shown to be optimal under the constraint of uncoded cache placement. This caching scheme is further proved to be decodable and optimal under the constraint of uncoded cache placement (i) when $KrM \leq 2N$, or $KrM \geq (K-1)N$, or $r \in \{1,2,N-1,N\}$, for every demand type (i.e., when the demanded files are not necessarily distinct), and (ii) when the number of distinct demanded files is no larger than four. Finally, a two-phase delivery scheme based on interference alignment is shown to be optimal to within a factor of 2 under the constraint of uncoded cache placement for all possible demands. As a by-product, the proposed interference alignment scheme is shown to reduce the (worst-case or average) load of state-of-the-art schemes for the coded caching problem where the users can request multiple files.
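The correlation model in the abstract can be made concrete with a small sketch: every $r$-subset $S$ of the $N$-file library contributes one unique common block $W_S$, so file $i$ is the collection of blocks $W_S$ with $i \in S$, and each file comprises $\binom{N-1}{r-1}$ blocks. The function name and string labels below are illustrative, not from the paper.

```python
from itertools import combinations
from math import comb

def file_blocks(N: int, r: int):
    """Sketch of the correlated-file model assumed in the abstract:
    each r-subset S of [N] owns a unique block W_S, and file i is
    made up of all blocks W_S such that i is in S."""
    blocks = {S: f"W_{S}" for S in combinations(range(N), r)}
    files = {i: [S for S in blocks if i in S] for i in range(N)}
    return blocks, files

blocks, files = file_blocks(N=4, r=2)
# There are comb(N, r) = 6 blocks in total for N=4, r=2.
# Each file consists of comb(N-1, r-1) = 3 blocks, so any two
# files overlap in exactly one common block.
```

Two files $i \neq j$ overlap in the blocks indexed by subsets containing both, i.e., $\binom{N-2}{r-2}$ blocks, which is what makes joint delivery of correlated demands beneficial.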

Authors (4)
  1. Kai Wan (67 papers)
  2. Daniela Tuninetti (89 papers)
  3. Mingyue Ji (86 papers)
  4. Giuseppe Caire (358 papers)
Citations (7)