User Preference Learning Based Edge Caching for Fog Radio Access Network (1801.06449v2)

Published 9 Jan 2018 in cs.NI, cs.IT, and math.IT

Abstract: In this paper, the edge caching problem in fog radio access network (F-RAN) is investigated. The edge caching optimization problem is formulated to find the policy that maximizes the overall cache hit rate, with content popularity considered in both time and space from the perspective of regional users. We propose an online content popularity prediction algorithm that leverages content features and user preferences, and an offline user preference learning algorithm based on the online gradient descent (OGD) method and the follow-the-(proximally)-regularized-leader (FTRL-Proximal) method. The proposed edge caching policy not only predicts future content popularity promptly and with low complexity in an online fashion, but also tracks spatial and temporal popularity dynamics without delay. Furthermore, we design two learning based edge caching architectures, and we theoretically derive the upper bound of the popularity prediction error, the lower bound of the cache hit rate, and the regret bound of the overall cache hit rate of the proposed policy. Simulation results show that the overall cache hit rate of the proposed policy is superior to those of traditional policies and asymptotically approaches the optimal performance.
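
To make the named learning machinery concrete, below is a minimal sketch of a per-coordinate FTRL-Proximal learner of the general kind the abstract refers to for user preference learning. It is a generic implementation under assumed choices (logistic loss on a content-feature vector, hyperparameters alpha, beta, l1, l2, and the class/variable names), not the authors' exact formulation from the paper.

```python
import numpy as np

class FTRLProximal:
    """Generic per-coordinate FTRL-Proximal learner (McMahan-style updates).
    A sketch of the kind of online preference learner the abstract names;
    the loss, features, and hyperparameters here are illustrative assumptions."""

    def __init__(self, dim, alpha=0.1, beta=1.0, l1=1.0, l2=1.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = np.zeros(dim)   # accumulated adjusted gradients
        self.n = np.zeros(dim)   # accumulated squared gradients

    def weights(self):
        # Closed-form per-coordinate solution with L1/L2 regularization;
        # coordinates with |z_i| <= l1 stay exactly zero (sparsity).
        w = np.zeros_like(self.z)
        active = np.abs(self.z) > self.l1
        w[active] = -(self.z[active] - np.sign(self.z[active]) * self.l1) / \
                    ((self.beta + np.sqrt(self.n[active])) / self.alpha + self.l2)
        return w

    def update(self, x, y):
        """One online step on a (content-feature vector x, request label y in {0,1})."""
        w = self.weights()
        p = 1.0 / (1.0 + np.exp(-np.clip(x @ w, -35, 35)))  # predicted preference
        g = (p - y) * x                                      # logistic-loss gradient
        sigma = (np.sqrt(self.n + g ** 2) - np.sqrt(self.n)) / self.alpha
        self.z += g - sigma * w                              # FTRL-Proximal accumulators
        self.n += g ** 2
        return p
```

In this kind of setup, the learned per-user weight vector plays the role of a user preference profile; combining such profiles with content features is what would allow popularity to be predicted online per region, in the spirit of the policy described above.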

Authors (5)
  1. Yanxiang Jiang (29 papers)
  2. Miaoli Ma (2 papers)
  3. Mehdi Bennis (333 papers)
  4. Fu-Chun Zheng (34 papers)
  5. Xiaohu You (177 papers)
Citations (3)
