Extreme URLLC: Vision, Challenges, and Key Enablers (2001.09683v1)

Published 27 Jan 2020 in cs.IT, cs.NI, and math.IT

Abstract: Notwithstanding the significant traction gained by ultra-reliable and low-latency communication (URLLC) in both academia and 3GPP standardization, the fundamentals of URLLC remain elusive. Meanwhile, new immersive and high-stake control applications with much stricter reliability, latency, and scalability requirements are posing unprecedented challenges in terms of system design and algorithmic solutions. This article aspires to provide a fresh and in-depth look into URLLC by first examining the limitations of 5G URLLC, and then putting forward key research directions for the next generation of URLLC, coined eXtreme ultra-reliable and low-latency communication (xURLLC). xURLLC is underpinned by three core concepts: (1) it leverages recent advances in ML for faster and more reliable data-driven predictions; (2) it fuses both radio frequency (RF) and non-RF modalities for modeling and combating rare events without sacrificing spectral efficiency; and (3) it underscores the much-needed joint communication and control co-design, as opposed to the communication-centric 5G URLLC. The intent of this article is to spearhead beyond-5G/6G mission-critical applications by laying out a holistic vision of xURLLC, its research challenges, and enabling technologies, while providing key insights grounded in selected use cases.
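The abstract's second core concept, modeling and combating rare events, is typically framed statistically: URLLC reliability targets (e.g., 99.999%) live in the extreme tail of the latency distribution, where empirical averages say little. A standard way to reason about such tails is extreme value theory's peaks-over-threshold method. The sketch below is illustrative only and is not from the paper; the latency data, threshold choice, and method-of-moments generalized Pareto fit are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical latency samples (ms): light-tailed body with occasional spikes.
latencies = rng.exponential(scale=1.0, size=100_000)

# Peaks-over-threshold: exceedances above a high threshold u are approximately
# generalized Pareto (GPD) distributed, per extreme value theory.
u = np.quantile(latencies, 0.95)
excess = latencies[latencies > u] - u

# Method-of-moments GPD fit: shape xi and scale sigma from the excess moments.
m, v = excess.mean(), excess.var()
xi = 0.5 * (1.0 - m * m / v)
sigma = 0.5 * m * (1.0 + m * m / v)

def tail_prob(x):
    """Estimated P(latency > x) for a deadline x above the threshold u."""
    p_u = np.mean(latencies > u)  # empirical P(latency > u)
    z = (x - u) / sigma
    if abs(xi) < 1e-12:           # xi -> 0: GPD degenerates to exponential tail
        sf = np.exp(-z)
    else:
        sf = max(1.0 + xi * z, 0.0) ** (-1.0 / xi)
    return p_u * sf

# Estimated probability of missing a 10 ms deadline -- a rare event far
# beyond what naive empirical counting at this sample size resolves well.
p_miss = tail_prob(10.0)
```

The design point mirrors the abstract's argument: rather than waiting to observe enough rare failures, one extrapolates the tail from a parametric model of exceedances, which is also where fusing non-RF side information can tighten the prediction.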

Authors (7)
  1. Jihong Park (123 papers)
  2. Sumudu Samarakoon (53 papers)
  3. Hamid Shiri (6 papers)
  4. Mohamed K. Abdel-Aziz (7 papers)
  5. Takayuki Nishio (43 papers)
  6. Anis Elgabli (28 papers)
  7. Mehdi Bennis (334 papers)
Citations (90)