
Model and Machine Learning based Caching and Routing Algorithms for Cache-enabled Networks (2004.06787v1)

Published 14 Apr 2020 in cs.NI

Abstract: In-network caching is likely to become an integral part of various networked systems (e.g., 5G networks, LPWAN and IoT systems) in the near future. In this paper, we compare and contrast model-based and machine learning approaches for designing caching and routing strategies to improve cache network performance (e.g., delay, hit rate). We first outline the key principles used in the design of model-based strategies and discuss the analytical results and bounds obtained for these approaches. By conducting experiments on real-world traces and networks, we identify the interplay between content popularity skewness and request stream correlation as an important factor affecting cache performance. With respect to routing, we show that the main factors impacting performance are alternate path routing and content search. We then discuss the applicability of multiple machine learning models, specifically reinforcement learning, deep learning, transfer learning and probabilistic graphical models for the caching and routing problem.
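To make the abstract's point about popularity skewness concrete, below is a minimal illustrative sketch (not code from the paper): it simulates an LRU cache serving requests drawn from a Zipf popularity distribution and reports the hit rate for several skew exponents. The catalog size, cache size, and `alpha` values are assumed demo parameters, not values taken from the paper's experiments.

```python
# Illustrative sketch (not from the paper): how content popularity skewness
# affects the hit rate of a simple LRU cache under independent requests.
import random
from collections import OrderedDict

def zipf_weights(n_items, alpha):
    """Unnormalized Zipf popularity weights for ranks 1..n_items."""
    return [1.0 / (rank ** alpha) for rank in range(1, n_items + 1)]

def lru_hit_rate(n_items, cache_size, alpha, n_requests, seed=0):
    """Simulate an LRU cache under independent Zipf-distributed requests."""
    rng = random.Random(seed)
    weights = zipf_weights(n_items, alpha)
    cache = OrderedDict()              # keys: item ids, ordered by recency
    hits = 0
    for _ in range(n_requests):
        item = rng.choices(range(n_items), weights=weights, k=1)[0]
        if item in cache:
            hits += 1
            cache.move_to_end(item)    # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_requests

if __name__ == "__main__":
    # Higher alpha means more skewed popularity, so the same cache size
    # captures a larger share of requests.
    for alpha in (0.6, 0.8, 1.0, 1.2):
        rate = lru_hit_rate(n_items=10_000, cache_size=100,
                            alpha=alpha, n_requests=200_000)
        print(f"alpha={alpha:.1f}  hit rate={rate:.3f}")
```

This sketch assumes independent requests; the abstract also highlights request-stream correlation as a factor, which would require replaying a correlated real-world trace instead of sampling items independently.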

Citations (3)
