
Intelligent Resource Slicing for eMBB and URLLC Coexistence in 5G and Beyond: A Deep Reinforcement Learning Based Approach (2003.07651v3)

Published 17 Mar 2020 in cs.NI and eess.SP

Abstract: In this paper, we study the resource slicing problem in a dynamic multiplexing scenario of two distinct 5G services, namely Ultra-Reliable Low Latency Communications (URLLC) and enhanced Mobile BroadBand (eMBB). While eMBB services focus on high data rates, URLLC is very strict in terms of latency and reliability. In view of this, the resource slicing problem is formulated as an optimization problem that aims at maximizing the eMBB data rate subject to a URLLC reliability constraint, while considering the variance of the eMBB data rate to reduce the impact of immediately scheduled URLLC traffic on the eMBB reliability. To solve the formulated problem, an optimization-aided Deep Reinforcement Learning (DRL) based framework is proposed, including: 1) eMBB resource allocation phase, and 2) URLLC scheduling phase. In the first phase, the optimization problem is decomposed into three subproblems and then each subproblem is transformed into a convex form to obtain an approximate resource allocation solution. In the second phase, a DRL-based algorithm is proposed to intelligently distribute the incoming URLLC traffic among eMBB users. Simulation results show that our proposed approach can satisfy the stringent URLLC reliability while keeping the eMBB reliability higher than 90%.

Citations (215)

Summary

  • The paper introduces a DRL-based resource slicing framework that maximizes eMBB throughput while meeting URLLC's strict latency and reliability requirements.
  • It utilizes a two-phase approach with optimization for eMBB resource allocation and DRL-driven scheduling for URLLC, reformulating challenges into convex subproblems.
  • Simulation results demonstrate that the framework achieves over 90% eMBB reliability and effective risk management under dynamic network conditions.

This paper addresses the crucial problem of resource allocation within 5G networks, focusing on the coexistence of enhanced Mobile BroadBand (eMBB) and Ultra-Reliable Low Latency Communications (URLLC). The primary objective is to keep eMBB data rates high while meeting URLLC's stringent latency and reliability requirements. To this end, resource slicing is formulated as an optimization problem that maximizes the eMBB data rate subject to a URLLC reliability constraint, while penalizing the variance of the eMBB data rate to mitigate the adverse impact of immediately scheduled URLLC traffic on eMBB reliability.
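In rough form, the risk-aware formulation described above can be sketched as follows. The notation here is illustrative, not the paper's exact symbols: $r_u$ stands for the achieved eMBB rate of user $u$, $\beta$ for a risk weight trading mean rate against rate variance, and $\epsilon_{\mathrm{URLLC}}$ for the target URLLC outage probability.

```latex
\begin{aligned}
\max_{\text{slicing decisions}} \quad
  & \sum_{u \in \mathcal{U}_{\mathrm{eMBB}}}
    \Big( \mathbb{E}\!\left[ r_u \right] - \beta\, \mathrm{Var}\!\left( r_u \right) \Big) \\
\text{s.t.} \quad
  & \Pr\!\left\{ \text{URLLC packet violates its latency budget} \right\}
    \le \epsilon_{\mathrm{URLLC}}
\end{aligned}
```

The variance penalty is what ties the two services together: spreading punctured URLLC load so that no single eMBB user's rate fluctuates too much is exactly the behavior the DRL scheduler is later trained to produce.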

Core Contributions

The authors propose an optimization-aided Deep Reinforcement Learning (DRL) framework tailored to allocate resources intelligently in varying network conditions. The methodology is structured into two phases: the eMBB resource allocation phase and the URLLC scheduling phase.

  1. eMBB Resource Allocation Phase:
    • The optimization problem is initially decomposed into three interrelated subproblems: eMBB resource block allocation, power allocation, and URLLC scheduling.
    • Each subproblem is reformulated into a convex form to derive approximate solutions that balance computational complexity with accuracy.
    • Considerations of risk, defined by the variance in eMBB data rates, are integrated into the optimization model to improve eMBB reliability.
  2. URLLC Scheduling Phase:
    • A DRL-based algorithm is implemented to dynamically allocate URLLC traffic by intelligently learning from real-time environment interactions.
    • To enhance the convergence of the DRL approach, a policy gradient-based actor-critic learning algorithm is utilized.
    • The design of the reward function encapsulates the specific requirements of both eMBB and URLLC, emphasizing the dual objectives of maintaining eMBB throughput and URLLC reliability.
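The URLLC scheduling phase can be pictured as a small actor-critic loop. The following is a minimal, self-contained sketch, not the paper's implementation: the softmax policy over eMBB users, the linear TD(0) critic, the state features, and the step sizes are all assumptions introduced here for illustration.

```python
import numpy as np

class ActorCriticScheduler:
    """Toy policy-gradient actor-critic that routes incoming URLLC
    traffic to one of several eMBB users (illustrative sketch only)."""

    def __init__(self, n_users, n_features,
                 lr_actor=0.01, lr_critic=0.05, gamma=0.95, seed=0):
        self.rng = np.random.default_rng(seed)
        # Actor: one weight row per eMBB user; Critic: linear value weights.
        self.theta = self.rng.normal(scale=0.1, size=(n_users, n_features))
        self.w = np.zeros(n_features)
        self.lr_a, self.lr_c, self.gamma = lr_actor, lr_critic, gamma

    def policy(self, state):
        """Softmax over eMBB users: probability of puncturing each one."""
        logits = self.theta @ state
        logits -= logits.max()              # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def act(self, state):
        p = self.policy(state)
        return int(self.rng.choice(len(p), p=p)), p

    def update(self, state, action, reward, next_state):
        """One-step TD actor-critic update."""
        v, v_next = self.w @ state, self.w @ next_state
        td_error = reward + self.gamma * v_next - v
        self.w += self.lr_c * td_error * state          # critic: TD(0)
        p = self.policy(state)
        grad = -np.outer(p, state)                      # d log pi / d theta
        grad[action] += state                           # indicator for chosen user
        self.theta += self.lr_a * td_error * grad
        return td_error
```

In the paper's setting, the reward would encode both objectives, combining the eMBB rate (and its variance) with a penalty for URLLC reliability violations; here the reward is simply an opaque scalar supplied by the caller.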

Numerical Results and Implications

Simulation results demonstrate the framework's capability to fulfill URLLC's stringent reliability requirements while preserving eMBB reliability above 90%. The proposed solution effectively controls the risk tail of the eMBB rate distribution, and hence the eMBB outage probability, maintaining high reliability even in the face of unexpected URLLC demand.

The paper's findings carry significant implications for future wireless network designs, particularly in environments characterized by heterogeneous service demands. The integration of optimization methods with DRL yields a solution that is not only theoretically sound but also practically implementable for resource management in next-generation networks.

By considering both optimization and learning approaches in tandem, this work paves the way for adaptable, resilient network strategies that could be indispensable in realizing the full potential of 5G and beyond. Future research directions may extend these methods to incorporate even more dynamic traffic scenarios and broader application contexts within AI-driven network management.

This paper demonstrates that a holistic approach, combining optimization techniques with reinforcement learning, can significantly enhance the performance of resource allocation systems in complex and variable networking environments, ultimately contributing to more efficient and reliable 5G networks.