Constrained Deep Reinforcement Learning for Fronthaul Compression Optimization (2309.15060v2)

Published 26 Sep 2023 in eess.SY and cs.SY

Abstract: In the Centralized Radio Access Network (C-RAN) architecture, functions can be placed in central or distributed locations. This architecture can offer higher capacity and cost savings, but it also places strict requirements on the fronthaul (FH). Adaptive FH compression schemes that adapt the amount of compression to varying FH traffic are promising approaches for dealing with stringent FH requirements. In this work, we design such a compression scheme using a model-free, off-policy deep reinforcement learning algorithm that accounts for FH latency and packet-loss constraints. Furthermore, the algorithm is designed for model transparency and interpretability, which is crucial for AI trustworthiness in performance-critical domains. We show that our algorithm successfully chooses an appropriate compression scheme while satisfying the constraints, and exhibits a roughly 70% increase in FH utilization compared to a reference scheme.
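The constrained-RL idea in the abstract (maximize FH utilization subject to latency and packet-loss constraints) is commonly handled via Lagrangian relaxation: constraint violations are folded into the reward as penalties. The sketch below illustrates that general idea only; the function name, budgets, and multiplier values are illustrative assumptions, not the paper's actual formulation.

```python
def constrained_reward(fh_utilization, latency_ms, packet_loss,
                       latency_budget_ms=0.25, loss_budget=1e-3,
                       lam_latency=10.0, lam_loss=100.0):
    """Lagrangian-relaxed reward: utility minus penalties for constraint violations.

    All budgets and multipliers (lam_*) are hypothetical placeholders; in
    practice the multipliers would be tuned or learned (e.g. via dual ascent).
    """
    # Penalize only the amount by which each constraint is exceeded.
    latency_violation = max(0.0, latency_ms - latency_budget_ms)
    loss_violation = max(0.0, packet_loss - loss_budget)
    return (fh_utilization
            - lam_latency * latency_violation
            - lam_loss * loss_violation)
```

When both constraints are met, the reward reduces to the raw FH utilization; any violation subtracts a penalty proportional to its size, steering the agent toward compression choices that stay within the latency and loss budgets.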

Citations (3)