Regulating Safety and Security in Autonomous Robotic Systems (2007.08006v1)

Published 9 Jul 2020 in cs.CY

Abstract: Autonomous Robotic Systems are inherently safety-critical and have complex safety issues to consider (for example, a security failure can lead to a safety failure). Before they are deployed, these systems have to show evidence that they adhere to a set of regulator-defined rules for safety and security. Formal methods provide robust approaches to proving that a system obeys given rules, but formalising the (usually natural language) rules can prove difficult. Regulations specifically for autonomous systems are still being developed, but the safety rules for a human operator are a good starting point when trying to show that an autonomous system is safe. For applications of autonomous systems such as driverless cars and pilotless aircraft, there are clear rules for human operators, which have been formalised and used to prove that an autonomous system obeys some or all of these rules. However, in the space and nuclear sectors, applications are more likely to differ from one another, so a set of general safety principles has been developed. This allows novel applications to be assessed for their safety, but such general principles are difficult to formalise. To improve this situation, we are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems that are amenable to robust (formal) verification. These activities also have the benefit of bridging the gaps in knowledge between the space and nuclear communities and academia.
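
To illustrate what formalising a natural-language rule can look like (a hedged sketch: the rule and the symbols dist, d_min, unauthorisedCmd, and safeStop are hypothetical and not taken from the paper), a requirement such as "the robot must always keep a minimum separation distance from any human" can be written as a Linear Temporal Logic (LTL) property, with a second property showing how a security event can be tied to a safety response:

```latex
% Hypothetical sketch only: the rule and the propositions dist, d_min,
% unauthorisedCmd, and safeStop are illustrative assumptions, not taken
% from the paper.
\documentclass{article}
\usepackage{amssymb} % provides \Box and \Diamond for the LTL operators
\begin{document}

% "The robot always keeps at least the minimum separation distance."
\[ \Box\, (\mathit{dist} \geq d_{\min}) \]

% "If an unauthorised command is received, the system eventually reaches
%  a safe stop state" -- a security rule with a safety outcome.
\[ \Box\, (\mathit{unauthorisedCmd} \rightarrow \Diamond\, \mathit{safeStop}) \]

\end{document}
```

Properties in this form can then be checked against a model of the system with a model checker, which is the kind of robust (formal) verification the abstract refers to.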

Authors (2)
  1. Matt Luckcuck (26 papers)
  2. Marie Farrell (20 papers)