
Reinforcement Learning for Mitigating Intermittent Interference in Terahertz Communication Networks (2003.04832v1)

Published 10 Mar 2020 in eess.SP, cs.LG, and cs.NI

Abstract: Emerging wireless services with extremely high data rate requirements, such as real-time extended reality applications, mandate novel solutions to further increase the capacity of future wireless networks. In this regard, leveraging the large bandwidth available at terahertz frequency bands is seen as a key enabler. To overcome the large propagation loss at these very high frequencies, transmissions must be managed over highly directional links. However, uncoordinated directional transmissions by a large number of users can cause substantial interference in terahertz networks. While such interference is received over short, random time intervals, the received power can be large. In this work, a new framework based on reinforcement learning is proposed that uses an adaptive multi-thresholding strategy to efficiently detect and mitigate intermittent interference from directional links in the time domain. To find the optimal thresholds, the problem is formulated as a multidimensional multi-armed bandit system. An algorithm is then proposed that allows the receiver to learn the optimal thresholds with very low complexity. Another key advantage of the proposed approach is that it does not rely on any prior knowledge of the interference statistics, making it suitable for interference mitigation in dynamic scenarios. Simulation results confirm the superior bit-error-rate performance of the proposed method compared with two traditional time-domain interference mitigation approaches.
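To make the bandit formulation concrete, below is a minimal sketch of how a receiver could learn interference-mitigation thresholds as a multi-armed bandit. This is an illustrative epsilon-greedy simplification, not the paper's algorithm: the arm definition (a hypothetical low/high threshold pair), the reward (taken here as a simulated link-quality proxy), and all numeric values are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical epsilon-greedy bandit over a discretized threshold grid.
# Each "arm" is a candidate (low, high) amplitude-threshold pair used to
# flag and blank samples hit by intermittent directional interference.
rng = np.random.default_rng(0)

low_thresholds = np.linspace(0.5, 2.0, 4)    # illustrative grid values
high_thresholds = np.linspace(2.0, 5.0, 4)
arms = [(lo, hi) for lo in low_thresholds for hi in high_thresholds]

counts = np.zeros(len(arms))
values = np.zeros(len(arms))  # running mean reward per arm

def observe_reward(lo, hi):
    """Placeholder for receiver feedback, e.g. 1 - BER of a frame decoded
    after blanking samples whose magnitude exceeds the chosen thresholds.
    Simulated here: the best operating point is assumed near (1.0, 3.0)."""
    quality = np.exp(-((lo - 1.0) ** 2 + (hi - 3.0) ** 2))
    return quality + 0.05 * rng.standard_normal()

epsilon = 0.1
for t in range(2000):
    if rng.random() < epsilon:
        a = int(rng.integers(len(arms)))   # explore a random threshold pair
    else:
        a = int(np.argmax(values))         # exploit current best estimate
    reward = observe_reward(*arms[a])
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # incremental mean update

best = arms[int(np.argmax(values))]
print(f"learned thresholds: low={best[0]:.2f}, high={best[1]:.2f}")
```

The sketch only conveys the structure of the approach: the receiver needs no prior interference statistics and adapts its thresholds purely from observed rewards, which is the property the abstract highlights for dynamic scenarios.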

Citations (17)


