Mobile Edge Computing: A Survey on Architecture and Computation Offloading
Introduction
The concept of Mobile Edge Computing (MEC) has emerged to address the latency challenges associated with mobile cloud computing. While conventional centralized cloud computing (CC) introduces significant execution delays due to the transit of data between the user equipment (UE) and distant servers, MEC minimizes this delay by bringing compute and storage resources closer to the UE, at the edge of the network. This proximity of resources is particularly beneficial for latency-sensitive and computationally intensive applications. The paper "Mobile Edge Computing: A Survey on Architecture and Computation Offloading" by Pavel Mach and Zdenek Becvar provides a comprehensive overview of the MEC paradigm, its architecture, use cases, resource allocation strategies, and mobility management concerns.
MEC Overview and Use Cases
MEC is designed to enhance the performance and energy efficiency of UEs by offloading computation to nearby servers. The survey categorizes MEC benefits into three major use cases:
- Consumer-oriented services: These services directly benefit end-users by enabling applications such as augmented reality (AR), virtual reality (VR), and web acceleration, which require substantial computational power and low latency.
- Operator and third-party services: These include functionalities such as IoT gateways and intelligent transportation systems (ITS), which gather and preprocess data at the network edge before forwarding it to centralized servers for further analytics.
- Network performance and QoE improvement: MEC can alleviate backhaul congestion through local content caching and improve network synchronization and throughput by optimizing radio and backhaul coordination.
MEC Architecture
Several MEC architectures have been proposed, each approaching the challenge of integrating edge computing resources into mobile networks differently. Notable architectures discussed in the paper include:
- Small Cell Cloud (SCC): Enhances small cells with computation capabilities and manages them via a Small Cell Manager (SCM), which operates either centrally or in a distributed hierarchical manner.
- Mobile Micro Clouds (MMC): Connects computing resources directly to eNBs without an explicit control entity.
- MobiScud: Integrates MEC functionalities with SDN and NFV technologies, maintaining compatibility with existing mobile network infrastructures.
- Follow Me Cloud (FMC): Ensures that cloud services dynamically follow the user's movement by leveraging distributed data centers and virtualized resources.
- CONCERT: Envisions a hierarchical distribution of computing resources to optimally balance local, regional, and central servers.
The paper also discusses the ETSI reference architecture, which includes functional components like the MEC orchestrator and platform, facilitating seamless coordination and resource management within the MEC ecosystem.
Computation Offloading
Effective computation offloading is central to maximizing the benefits of MEC. The surveyed literature divides offloading strategies into full and partial offloading, focusing on the following objectives:
- Minimizing Execution Delay: Algorithms seek to reduce an offloaded task's end-to-end delay by balancing local execution time against transmission and remote processing times (e.g., the work of Liu et al.).
- Minimizing Energy Consumption: Many studies aim to minimize UE energy consumption, especially under stringent delay constraints. For instance, the work of Mao et al. incorporates dynamic voltage scaling and power optimization during offloading.
- Trade-offs Between Delay and Energy Consumption: A significant number of strategies find a balance between reducing energy consumption and minimizing execution delay (e.g., the EECO algorithm by Zhang et al.).
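All three objectives reduce to comparing a (delay, energy) cost pair for local execution against one for offloading. The sketch below uses a standard dynamic-power CPU model and a simple transmission model; every constant (`kappa`, `p_tx`, the CPU frequencies and task sizes) is an illustrative assumption, not a figure taken from the survey.

```python
# Sketch of the two cost pairs that offloading algorithms trade off.
# All parameter values are illustrative assumptions.

def local_cost(cycles, f_local, kappa=1e-28):
    """Local execution: delay = cycles / f_local;
    energy ~ kappa * f_local^2 * cycles (dynamic CMOS power model)."""
    delay = cycles / f_local
    energy = kappa * (f_local ** 2) * cycles
    return delay, energy

def offload_cost(data_bits, cycles, rate_bps, f_mec, p_tx=0.5):
    """Full offloading: delay = uplink transfer + remote execution;
    the UE spends energy only on radio transmission (downlink ignored)."""
    t_up = data_bits / rate_bps
    delay = t_up + cycles / f_mec
    energy = p_tx * t_up
    return delay, energy

# Example: 1 MB input, 1e9 CPU cycles, 10 Mbit/s uplink, 10x faster MEC CPU.
d_loc, e_loc = local_cost(cycles=1e9, f_local=1e9)
d_off, e_off = offload_cost(data_bits=8e6, cycles=1e9, rate_bps=1e7, f_mec=1e10)
print(f"local:   {d_loc:.2f} s, {e_loc * 1e3:.1f} mJ")
print(f"offload: {d_off:.2f} s, {e_off * 1e3:.1f} mJ")
```

With these numbers offloading wins on delay but loses on energy, which is exactly the kind of conflict the trade-off algorithms above resolve.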
Key Insights on Offloading Decision
- UEs should prioritize offloading in scenarios with good channel quality, since shorter transmission times translate directly into energy savings.
- Applications with small data sets to offload and high computational demands are prime candidates for MEC offloading.
- The offloading decision needs to consider the variability in channel quality and offloading costs dynamically.
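The first two insights can be condensed into a toy decision rule: offload only when the radio energy of the upload undercuts the energy of computing locally. This is a deliberately simplified sketch that ignores delay constraints; the power model and all constants are assumptions, not a rule taken from the survey.

```python
# Sketch: channel- and workload-aware offloading rule.
# Constants (p_tx, kappa, f_local) are illustrative assumptions.

def should_offload(data_bits, cycles, rate_bps,
                   p_tx=0.5, kappa=1e-28, f_local=1e9):
    """Offload when the UE's radio energy for the upload is lower than
    the energy it would burn computing the task locally. A higher uplink
    rate (better channel) or a smaller data/computation ratio both tip
    the decision toward offloading."""
    e_local = kappa * (f_local ** 2) * cycles
    e_tx = p_tx * data_bits / rate_bps
    return e_tx < e_local

# Poor channel (2 Mbit/s): the upload is too expensive -> compute locally.
print(should_offload(data_bits=8e6, cycles=1e9, rate_bps=2e6))   # False
# Good channel (50 Mbit/s): the upload is cheap -> offload.
print(should_offload(data_bits=8e6, cycles=1e9, rate_bps=5e7))   # True
```

The third insight, variability, would make this a runtime decision: the rule is re-evaluated as `rate_bps` changes rather than fixed at application start.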
Allocation of Computing Resources
The efficient allocation of computing resources within the MEC framework is pivotal. The surveyed work categorizes strategies based on whether a single or multiple nodes handle the offloaded tasks. Key points include:
- Single Node Allocation: Approaches such as the priority-based cooperation policy by Zhao et al. focus on maximizing the applications served by a single MEC node.
- Multiple Node Allocation: Strategies such as the dynamic coalition formation by Oueis et al. consider clusters of small cells to balance the computation load and improve execution delay.
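The multiple-node case can be illustrated with a greedy placement heuristic that always hands the next task to the node whose queue drains earliest. This is a simplified stand-in for cluster-based schemes like the coalition formation by Oueis et al., not a reimplementation of any surveyed algorithm; node capacities and task sizes are hypothetical.

```python
# Sketch: greedy task placement across a cluster of MEC nodes,
# minimizing each task's completion time. Illustrative only.
import heapq

def place_tasks(node_capacities_hz, task_cycles):
    """Assign each task to the node that would finish it earliest,
    tracking per-node queue finish times in a min-heap."""
    # Heap entries: (current finish time, node index).
    heap = [(0.0, i) for i in range(len(node_capacities_hz))]
    heapq.heapify(heap)
    assignment = []
    for cycles in task_cycles:
        finish, node = heapq.heappop(heap)
        finish += cycles / node_capacities_hz[node]  # queue this task
        assignment.append(node)
        heapq.heappush(heap, (finish, node))
    return assignment

# Three MEC nodes (the third half as fast), four equal tasks:
# the load spreads across the cluster instead of piling on one node.
print(place_tasks([1e10, 1e10, 5e9], [1e9, 1e9, 1e9, 1e9]))  # [0, 1, 2, 0]
```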
Mobility Management
Handling user mobility is critical in MEC to maintain service continuity. The paper discusses several key techniques:
- Power Control: Adjusting the transmission power of the serving cell to delay handover, keeping the UE attached to the cell that hosts its offloaded computation.
- VM Migration: Deciding whether to migrate VMs (virtual machines) closer to the UE or select optimal paths dynamically based on user movement and compute requirements. Strategies like the optimal threshold decision policy by Ksentini et al. demonstrate significant improvements in handling mobility.
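A minimal sketch of a threshold-style migration rule is shown below. It is a simplified stand-in for policies such as the one by Ksentini et al. (which is derived from a Markov decision process); the hop-count cost model and the threshold value are illustrative assumptions.

```python
# Sketch: distance-threshold VM migration decision. The cost model
# and threshold are illustrative assumptions, not the surveyed policy.

def migrate_vm(hops_to_user, migration_cost, per_hop_cost, threshold_hops=2):
    """Migrate once the UE has drifted beyond `threshold_hops` network
    hops from its VM and the accumulated per-hop transmission cost
    outweighs the one-off cost of moving the VM."""
    if hops_to_user <= threshold_hops:
        return False  # UE still close enough; serving remotely is fine
    return hops_to_user * per_hop_cost > migration_cost

print(migrate_vm(hops_to_user=1, migration_cost=10, per_hop_cost=3))  # False
print(migrate_vm(hops_to_user=5, migration_cost=10, per_hop_cost=3))  # True
```

Tuning the threshold captures the core tension: migrating too eagerly wastes bandwidth on VM state transfer, while migrating too late inflates the user-to-VM latency.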
Conclusion and Open Research Challenges
The paper identifies several open challenges that need addressing:
- Scalable and dynamic distribution of MEC resources across the network.
- Advanced computation offloading decision mechanisms that jointly consider UE energy consumption and MEC resource utilization.
- Real-time adaptive allocation of computing resources to dynamically balance loads.
- Robust mobility management techniques that combine VM migration, power control, and path selection.
The survey highlights the potential of MEC to revolutionize mobile networking by transforming end-user experiences and enhancing network efficiency. Future research should focus on addressing the outlined challenges to fully leverage MEC's capabilities.