On Optimal Load-Memory Tradeoff of Cache-Aided Scalar Linear Function Retrieval (2001.03577v1)

Published 10 Jan 2020 in cs.IT and math.IT

Abstract: Coded caching has the potential to greatly reduce network traffic by leveraging the cheap and abundant storage available in end-user devices so as to create multicast opportunities in the delivery phase. In the seminal work by Maddah-Ali and Niesen (MAN), the shared-link coded caching problem was formulated, where each user demands one file (i.e., single file retrieval). This paper generalizes the MAN problem so as to allow users to request scalar linear functions of the files. This paper proposes a novel coded delivery scheme that, based on MAN uncoded cache placement, is shown to allow for the decoding of arbitrary scalar linear functions of the files (over arbitrary finite fields). Interestingly, and quite surprisingly, it is shown that the load for cache-aided scalar linear function retrieval depends on the number of linearly independent functions that are demanded, akin to the cache-aided single-file retrieval problem where the load depends on the number of distinct file requests. The proposed scheme is optimal under the constraint of uncoded cache placement in terms of worst-case load, and within a factor of 2 otherwise. The key idea of this paper can be extended to all scenarios to which the original MAN scheme has been extended, including demand-private and/or device-to-device settings.
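
The dependence highlighted in the abstract can be illustrated numerically. The Python sketch below is not from the paper: it assumes the memory-load corner points under MAN uncoded cache placement take the known YMA-style form for distinct demands, with the number of distinct demanded files replaced by the rank r of the demanded scalar linear functions, which is the quantity the abstract says the load depends on. The function name and the evaluation at intermediate points are illustrative assumptions.

```python
from math import comb


def corner_points(K: int, N: int, r: int):
    """Memory-load corner points for K users, N files, and r linearly
    independent demanded functions, assuming a YMA-style load expression
    with the number of distinct demands replaced by the rank of the
    demand matrix (the dependence highlighted in the abstract).

    Returns (M, R) pairs for t = K*M/N in {0, 1, ..., K}.
    """
    points = []
    for t in range(K + 1):
        M = t * N / K  # cache size per user, in units of files
        # Worst-case load on the shared link at this corner point;
        # comb(n, k) is 0 when k > n, so the correction term vanishes
        # whenever r is large relative to K - t.
        R = (comb(K, t + 1) - comb(K - r, t + 1)) / comb(K, t)
        points.append((M, R))
    return points


if __name__ == "__main__":
    # Example: K = 4 users, N = 4 files.
    # If the 4 demanded functions are linearly independent (r = 4), the
    # load reduces to the classical MAN worst case (K - t)/(t + 1);
    # if the demands only span a rank-2 subspace (r = 2), the load drops.
    for r in (4, 2):
        print(f"r = {r}:", corner_points(4, 4, r))
```

Running the example shows the load strictly decreasing as r decreases at every memory point with t < K, mirroring the single-file-retrieval case where fewer distinct requests reduce the delivery load.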

Citations (3)
