
Caching Policy for Cache-enabled D2D Communications by Learning User Preference (1707.08409v2)

Published 26 Jul 2017 in cs.IT and math.IT

Abstract: Prior works on designing caching policies do not distinguish content popularity from user preference. In this paper, we illustrate the caching gain obtained by exploiting individual user request behavior. After showing the connection between the two concepts, we provide a model for synthesizing user preference from content popularity. We then optimize the caching policy with knowledge of user preference and active level to maximize the offloading probability for cache-enabled device-to-device communications, and develop a low-complexity algorithm to find the solution. To learn user preference, we model user request behavior with probabilistic latent semantic analysis (pLSA) and learn the model parameters with the expectation-maximization (EM) algorithm. By analyzing a MovieLens dataset, we find that user preferences differ noticeably across users, while each user's active level and topic preference change slowly over time. Based on this observation, we introduce a prior-knowledge-based learning algorithm for user preference, which shortens the learning time. Simulation results show a remarkable performance gain of the user-preference-aware caching policy over existing popularity-based policies, on both the real dataset and synthetic data validated against it.
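The pLSA model described in the abstract treats each user as a mixture over latent topics, so that a user's preference for a content is P(f|u) = Σ_z P(z|u)P(f|z), fit to observed request counts by EM. The sketch below is a minimal illustration of that standard pLSA/EM fit on a toy user-by-content request matrix; the variable names, toy data, and topic count are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def plsa_em(counts, n_topics, n_iters=50):
    """Fit pLSA by EM to a user-by-content request-count matrix.

    counts[u, f] = number of times user u requested content f.
    Returns (p_z_u, p_f_z): each user's topic preference P(z|u) and
    each topic's content distribution P(f|z).
    """
    n_users, n_files = counts.shape
    # Random initialization, normalized into valid distributions.
    p_z_u = rng.random((n_users, n_topics))
    p_z_u /= p_z_u.sum(axis=1, keepdims=True)         # P(z|u)
    p_f_z = rng.random((n_topics, n_files))
    p_f_z /= p_f_z.sum(axis=1, keepdims=True)         # P(f|z)

    for _ in range(n_iters):
        # E-step: posterior P(z|u,f) proportional to P(z|u) P(f|z).
        post = p_z_u[:, :, None] * p_f_z[None, :, :]  # shape (u, z, f)
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate parameters from expected counts.
        weighted = counts[:, None, :] * post          # n(u,f) P(z|u,f)
        p_f_z = weighted.sum(axis=0)
        p_f_z /= p_f_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_u = weighted.sum(axis=2)
        p_z_u /= p_z_u.sum(axis=1, keepdims=True) + 1e-12
    return p_z_u, p_f_z

# Toy request matrix: 4 users, 6 contents, 2 latent topics.
counts = np.array([[9, 8, 7, 0, 1, 0],
                   [8, 9, 6, 1, 0, 0],
                   [0, 1, 0, 9, 8, 7],
                   [1, 0, 0, 8, 9, 8]], dtype=float)
p_z_u, p_f_z = plsa_em(counts, n_topics=2)
# User preference: P(f|u) = sum over z of P(z|u) P(f|z).
pref = p_z_u @ p_f_z
print(np.round(pref[0], 3))  # first user's learned content preference
```

The learned `pref` matrix is what the paper's caching-policy optimization would consume, weighting each user's preference by their active level when maximizing the D2D offloading probability.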

Citations (96)
