Impatient Queuing for Intelligent Task Offloading in Multi-Access Edge Computing (2105.11727v5)

Published 25 May 2021 in cs.NI

Abstract: Multi-access edge computing (MEC) emerges as an essential part of the upcoming Fifth Generation (5G) and future beyond-5G mobile communication systems. It adds computational power at the edge of cellular networks, much closer to energy-constrained user devices, and thereby allows users to offload tasks to edge computing nodes, enabling low-latency applications with very limited battery consumption. However, due to the high dynamics of user demand and server load, task congestion may occur at the edge nodes, resulting in long queuing delays. Such delays can significantly degrade the quality of experience (QoE) of latency-sensitive applications, raise the risk of service outage, and cannot be efficiently resolved by conventional queue management solutions. In this article, we study a latency-outage-critical scenario, where users intend to limit the risk of latency outage. We propose an impatience-based queuing strategy that lets such users intelligently choose between MEC offloading and local computation, allowing them to rationally renege from the task queue. Numerical simulations demonstrate that the proposed approach is efficient for generic service models when perfect queue status information is available. For the practical case where users obtain only imperfect queue status information, we design an optimal online learning strategy that enables its application in Poisson service scenarios.
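To make the impatience-based decision concrete, the sketch below illustrates one plausible reading of the reneging rule described in the abstract: a user who observes the edge queue length estimates the probability that queuing plus service will exceed its latency deadline, and reneges to local computation when that outage risk exceeds its tolerance. It assumes exponentially distributed edge service times (consistent with the Poisson service scenario the abstract mentions) and perfect queue status information; the function names, the Erlang-based outage estimate, and the threshold rule are illustrative assumptions, not the paper's exact policy.

```python
import math


def erlang_sf(k, mu, t):
    """Survival function P(T > t) of an Erlang(k, mu) random variable,
    i.e. the sum of k i.i.d. Exp(mu) service times."""
    # P(T > t) = exp(-mu*t) * sum_{i=0}^{k-1} (mu*t)^i / i!
    s = sum((mu * t) ** i / math.factorial(i) for i in range(k))
    return math.exp(-mu * t) * s


def offload_or_renege(queue_len, mu, deadline, eps):
    """Illustrative impatience rule: offload only if the estimated
    probability of missing the latency deadline at the edge node
    stays within the tolerated outage risk eps; otherwise renege
    from the queue and compute the task locally."""
    # With queue_len tasks ahead plus our own task, the total edge
    # sojourn time is Erlang(queue_len + 1, mu) under exponential service.
    p_outage = erlang_sf(queue_len + 1, mu, deadline)
    return "offload" if p_outage <= eps else "renege_and_compute_locally"


# Example: 4 tasks already queued, edge serves 10 tasks/s on average,
# 0.8 s latency deadline, tolerated outage risk of 5 %.
# The outage probability is roughly 10 %, so the user reneges.
print(offload_or_renege(queue_len=4, mu=10.0, deadline=0.8, eps=0.05))
```

With imperfect queue status information, as the abstract notes, the observed queue length would have to be replaced by a learned estimate, which is where the paper's online learning strategy comes in.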

Citations (18)
