A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization (1709.07080v1)

Published 20 Sep 2017 in cs.NI and cs.AI

Abstract: In this paper we design and evaluate a Deep-Reinforcement Learning agent that optimizes routing. Our agent adapts automatically to current traffic conditions and proposes tailored configurations that attempt to minimize the network delay. Experiments show very promising performance. Moreover, this approach provides important operational advantages with respect to traditional optimization algorithms.

Citations (163)

Summary

Overview of Deep-Reinforcement Learning for SDN Routing Optimization

The paper "A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization" presents an innovative application of Deep-Reinforcement Learning (DRL) within the context of Knowledge-Defined Networking (KDN). Given the recent integration of Software-Defined Networking (SDN) and Network Analytics (NA), the authors explore how DRL can be leveraged to optimize routing configurations with a focus on minimizing network delay.

Introduction to the Approach

This research investigates a DRL agent that adapts automatically to current network conditions and proposes routing configurations aimed at reducing delay. Because the agent relies on neither predefined heuristics nor analytical models, it offers operational advantages over conventional optimization methods and is well suited to real-time network control.

State of the Art and Contribution

Routing optimization has traditionally relied on analytical and heuristic-based approaches. The advent of SDN has expanded the space of feasible solutions, yet these traditional methods often struggle to generalize to unseen network states. The paper posits that DRL offers a novel alternative: once trained, the agent produces near-optimal configurations in a single computational step. The authors present this as the first known application of DRL to network routing in this manner.

The DRL Agent Architecture

The proposed DRL agent implements an off-policy, actor-critic, deterministic policy gradient algorithm. It interacts with its environment through three signals: state, action, and reward.

  • State: The Traffic Matrix (TM), which specifies the bandwidth requested between each pair of nodes.
  • Action: A tuple of link weights that determines the paths used for each source-destination pair.
  • Reward: Computed from the mean network delay (lower delay yields higher reward).

Two deep neural networks underpin the agent: an actor that generates actions and a critic that evaluates those actions against the observed reward.
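
As a concrete illustration, the following is a minimal PyTorch sketch of such an actor-critic pair for the 14-node, 21-link topology evaluated in the paper. The layer sizes, activations, and class names are assumptions for illustration; the paper's exact architectures are not specified here.

    import torch
    import torch.nn as nn

    N_NODES = 14                    # nodes in the evaluated topology
    N_LINKS = 21                    # one weight per duplex link
    STATE_DIM = N_NODES * N_NODES   # flattened traffic matrix

    class Actor(nn.Module):
        """Maps a flattened traffic matrix to a tuple of link weights."""
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, N_LINKS), nn.Sigmoid(),  # weights in (0, 1)
            )

        def forward(self, tm):
            return self.net(tm)

    class Critic(nn.Module):
        """Scores a (traffic matrix, link-weight action) pair with a Q-value."""
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + N_LINKS, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, tm, action):
            return self.net(torch.cat([tm, action], dim=-1))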

Experimental Evaluation and Results

The DRL agent was evaluated in a simulated environment using a scale-free network topology of 14 nodes and 21 duplex links. Traffic intensities ranging from 12.5% to 125% of total network capacity were tested, over 1,000 unique traffic configurations.
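
A minimal sketch of how such an evaluation set might be generated is shown below. The uniform demand model, the number of intensity levels, and the per-level sample count are assumptions for illustration; the summary specifies only the intensity range and the total count.

    import numpy as np

    def random_traffic_matrix(n_nodes=14, intensity=0.5, capacity=1.0, rng=None):
        """Draw a traffic matrix whose total demand equals a fixed
        fraction (intensity) of total network capacity.

        The uniform demand model is an assumption for illustration;
        the paper's exact traffic model is not specified here.
        """
        rng = rng or np.random.default_rng()
        tm = rng.uniform(size=(n_nodes, n_nodes))
        np.fill_diagonal(tm, 0.0)                # no self-directed traffic
        tm *= intensity * capacity / tm.sum()    # rescale to the target load
        return tm

    # e.g., 10 intensity levels from 12.5% to 125%, 100 samples each
    intensities = np.linspace(0.125, 1.25, 10)
    tms = [random_traffic_matrix(intensity=i)
           for i in intensities for _ in range(100)]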

Key findings include:

  1. Training Performance: The agent's routing performance improved steadily as training episodes accumulated (a minimal update step is sketched below).
  2. Benchmark Comparison: Across all traffic intensity levels, the trained agent consistently produced routing configurations in the top quartile of the benchmark, outperforming randomly generated configurations.
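
The following is a hedged sketch of one training update consistent with the off-policy, actor-critic, deterministic policy gradient design described above, reusing the Actor and Critic classes from the earlier sketch. Since each traffic matrix is treated here as an independent one-step episode, the critic regresses directly onto the observed reward; the learning rates and the reward definition (negative mean delay) are assumptions.

    import torch

    actor, critic = Actor(), Critic()
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def update(tm, action, reward):
        """One update from a replay batch.

        tm:     (B, 196) flattened traffic matrices
        action: (B, 21) link-weight tuples
        reward: (B, 1) rewards, e.g. the negative mean network delay
        """
        # Critic: fit Q(s, a) to the observed reward; one-step episodes
        # need no bootstrapped next-state target.
        critic_loss = torch.nn.functional.mse_loss(critic(tm, action), reward)
        critic_opt.zero_grad()
        critic_loss.backward()
        critic_opt.step()

        # Actor: deterministic policy gradient, i.e. ascend the critic's
        # score of the actor's own action.
        actor_loss = -critic(tm, actor(tm)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()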

These empirical results highlight the DRL agent's effectiveness at minimizing network delay relative to the evaluated benchmark configurations.
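
As an illustration of the benchmark comparison, a quartile check against randomly generated configurations could be computed as follows (the delay values here are placeholders, not results from the paper):

    import numpy as np

    def fraction_outperformed(agent_delay, random_delays):
        """Fraction of random routing configurations whose mean delay the
        agent beats; above 0.75 places the agent in the top quartile."""
        return float(np.mean(agent_delay < np.asarray(random_delays)))

    # Placeholder delays for illustration only
    rng = np.random.default_rng(0)
    print(fraction_outperformed(0.9, rng.uniform(0.8, 2.0, size=1000)))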

Discussion and Implications

This paper showcases the potential of DRL to enhance SDN routing efficiency, particularly in dynamic, real-time applications:

  • One-step Optimization: After training, the agent produces an optimized configuration in a single forward pass, a significant advantage for timely network interventions (see the sketch after this list).
  • Model-free Adaptability: Unlike model-based optimization algorithms, the DRL agent learns from experience, adjusting dynamically without requiring simplifying assumptions about the system.
  • Black-box Optimization: Because the agent treats the network as a black box, policy objectives can be changed simply by redefining the reward function, streamlining algorithm development.
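
A minimal sketch of this one-step inference, reusing the Actor class from the earlier sketch (the input traffic matrix here is random placeholder data):

    import torch

    actor = Actor()                 # trained weights would be loaded here
    tm = torch.rand(1, 14 * 14)     # current traffic matrix, flattened

    with torch.no_grad():
        link_weights = actor(tm)    # one forward pass, no iterative search

    # Routing paths are then derived from these link weights, e.g. via
    # shortest-path computation over the weighted topology.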

Future work may extend these insights by exploring broader benchmarks and more complex topologies to further validate and refine the DRL approach in practical settings. As AI continues to advance, such developments could play a critical role in the evolution of autonomous network management.
