A Framework for Semantic In-network Caching and Prefetching in 5G Mobile Networks (1711.10154v1)

Published 28 Nov 2017 in cs.NI

Abstract: The recent popularity of mobile devices has increased demand for mobile network services and applications that require minimal delay. 5G mobile networks are expected to provide much lower delay than present mobile networks. A conventional way to decrease latency is to cache content closer to end users, but currently deployed methods are not effective enough. In this paper, we propose a new in-network caching framework that predicts subsequent user requests and prefetches the necessary content, markedly decreasing end-to-end latency in 5G mobile networks. Using semantic inference at the edge, we deduce what the end user is likely to request next and prefetch that content. We validate the proposed technique through emulation, compare it with the state of the art, and report substantial gains.
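The abstract describes prefetching driven by semantic prediction at the network edge. Below is a minimal sketch of that idea, assuming a hypothetical relation map (RELATED) standing in for the paper's semantic inference step; the class and function names are illustrative and not taken from the paper.

```python
# Sketch: an edge cache that prefetches semantically related content.
# RELATED is an assumed stand-in for the paper's semantic inference; it maps
# each content ID to items a user is likely to request next.
from collections import OrderedDict

RELATED = {
    "video/seg-001": ["video/seg-002", "video/seg-003"],
    "article/intro": ["article/section-1"],
}


class EdgeCache:
    """LRU edge cache that prefetches predicted follow-up content."""

    def __init__(self, capacity, origin_fetch):
        self.capacity = capacity
        self.origin_fetch = origin_fetch  # callable: content_id -> bytes
        self.store = OrderedDict()

    def _put(self, content_id, data):
        self.store[content_id] = data
        self.store.move_to_end(content_id)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def get(self, content_id):
        if content_id in self.store:
            self.store.move_to_end(content_id)
            data = self.store[content_id]          # hit: served from the edge
        else:
            data = self.origin_fetch(content_id)   # miss: fetch from origin
            self._put(content_id, data)
        # Prefetch items the semantic model predicts will be requested next,
        # so the follow-up request is served locally with minimal delay.
        for related_id in RELATED.get(content_id, []):
            if related_id not in self.store:
                self._put(related_id, self.origin_fetch(related_id))
        return data


if __name__ == "__main__":
    cache = EdgeCache(capacity=16, origin_fetch=lambda cid: f"<{cid}>".encode())
    cache.get("video/seg-001")              # miss; prefetches seg-002, seg-003
    assert "video/seg-002" in cache.store   # the next request is a local hit
```

This only illustrates the caching-plus-prefetching pattern; the paper's actual contribution is the semantic inference used to populate the prediction step, which is not reproduced here.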

Authors (3)
  1. Can Mehteroğlu (1 paper)
  2. Yunus Durmuş (1 paper)
  3. Ertan Onur (7 papers)
