
Multi-Agent Reinforcement Learning for Maritime Operational Technology Cyber Security (2401.10149v1)

Published 18 Jan 2024 in cs.LG, cs.CR, and cs.MA

Abstract: This paper demonstrates the potential for autonomous cyber defence to be applied to industrial control systems and provides a baseline environment for further exploring the application of Multi-Agent Reinforcement Learning (MARL) to this problem domain. It introduces IPMSRL, a simulation environment of a generic Integrated Platform Management System (IPMS), and explores the use of MARL for autonomous cyber defence decision-making on generic maritime IPMS Operational Technology (OT). OT cyber defensive actions are less mature than those for Enterprise IT, owing to the relatively brittle nature of OT infrastructure, which stems from the use of legacy systems, design-time engineering assumptions, and the lack of full-scale modern security controls. Many obstacles remain across the cyber landscape due to continually increasing cyber-attack sophistication and the limitations of traditional IT-centric cyber defence solutions. Traditional IT controls are rarely deployed on OT infrastructure, and where they are, some threats are not fully addressed. In our experiments, a shared-critic implementation of Multi-Agent Proximal Policy Optimisation (MAPPO) outperformed Independent Proximal Policy Optimisation (IPPO): MAPPO reached an optimal policy (episode outcome mean of 1) after 800k timesteps, whereas IPPO reached an episode outcome mean of only 0.966 after one million timesteps. Hyperparameter tuning greatly improved training performance: across one million timesteps, the tuned hyperparameters reached an optimal policy, whereas the default hyperparameters won only sporadically, with most simulations ending in a draw. We also tested a real-world constraint, attack detection alert success, and found that when alert success probability is reduced to 0.75 or 0.9, the MARL defenders still won over 97.5% or 99.5% of episodes, respectively.
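The alert-success constraint described in the abstract can be illustrated with a minimal sketch (this is not the paper's IPMSRL code; the function name, alert labels, and interface are hypothetical): each raised alert independently reaches the MARL defenders with probability `success_prob`, so lowering that probability to 0.9 or 0.75 models increasingly unreliable attack detection.

```python
import random

def observe_alerts(true_alerts, success_prob, rng=random):
    """Return the subset of raised alerts the defender actually receives.

    Each alert independently reaches the defender with probability
    `success_prob`; the remainder are silently dropped, modelling
    imperfect attack detection.
    """
    return [alert for alert in true_alerts if rng.random() < success_prob]

# Hypothetical alert names, for illustration only.
rng = random.Random(0)
raised = ["hmi_compromise", "lateral_movement", "rtu_tamper", "c2_beacon"]
received = observe_alerts(raised, success_prob=0.75, rng=rng)
print(received)
```

With `success_prob=1.0` the defenders see every alert; at 0.75 roughly a quarter of alerts are lost, which is the harsher of the two settings under which the paper reports the defenders still winning over 97.5% of episodes.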

Authors (7)
  1. Alec Wilson (2 papers)
  2. Ryan Menzies (2 papers)
  3. Neela Morarji (1 paper)
  4. David Foster (14 papers)
  5. Marco Casassa Mont (1 paper)
  6. Esin Turkbeyler (1 paper)
  7. Lisa Gralewski (1 paper)
Citations (3)
