
Trustworthy explainable AI for optical network automation

Develop trustworthy explainable AI techniques that provide sufficient transparency of black-box AI models used for optical network automation so that operators can understand, trust, and reliably govern AI-driven decisions.


Background

The paper stresses that opaque, black-box models impede operator trust and safe deployment in optical networks. It identifies the need for explainability that goes beyond superficial explanations to deliver actionable transparency.

This aligns with the broader theme of moving toward AI systems that support human-in-the-loop interaction and principled decision-making.
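As a concrete, hypothetical illustration of the kind of actionable transparency described above, a model-agnostic technique such as permutation feature importance can expose which telemetry inputs a black-box predictor actually relies on. The sketch below assumes an invented quality-of-transmission predictor and invented feature names; it is not from the paper, only a minimal example of one XAI building block.

```python
import random

# Hypothetical black-box QoT predictor: the operator sees only inputs
# and outputs, not the model internals. For illustration, the output
# depends strongly on feature 0 (e.g. launch power), weakly on
# feature 1 (e.g. span count), and not at all on feature 2 (noise).
def blackbox_predict(features):
    return 3.0 * features[0] + 0.5 * features[1] + 0.0 * features[2]

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic XAI: error increase when one feature is shuffled."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    baseline = mse([predict(x) for x in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the feature's link to the target
            Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
            drops.append(mse([predict(x) for x in Xp]) - baseline)
        importances.append(sum(drops) / n_repeats)
    return importances

# Synthetic telemetry samples (hypothetical feature layout).
X = [[float(i % 7), float(i % 5), float(i % 3)] for i in range(30)]
y = [blackbox_predict(x) for x in X]
imp = permutation_importance(blackbox_predict, X, y)
# Feature 0 dominates; the irrelevant feature 2 scores zero.
```

An explanation of this form lets an operator check that an automated decision is driven by physically plausible inputs rather than spurious correlations, which is one step toward the human-in-the-loop governance the paper calls for.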

References

Further, according to , key AI research challenges remain open: (i) Training: lack of available training datasets from real-world network deployments; (ii) Learning: lack of lifelong (i.e., continual) learning, including AI degradation detection and model adaptation to progressive distribution shift; and (iii) Explainability: lack of trustworthy explainable AI (XAI) due to insufficient transparency of black-box AI.

From Artificial Intelligence to Active Inference: The Key to True AI and 6G World Brain [Invited] (2505.10569 - Maier, 29 Apr 2025) in Section 1 (Introduction), Point Alpha — Active Inference in Optical Networks