Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning (2204.01409v1)

Published 4 Apr 2022 in eess.SY and cs.SY

Abstract: The objective of this research is to enable safety-critical systems to simultaneously learn and execute optimal control policies in a safe manner, achieving complex autonomy. Learning optimal policies via trial and error, i.e., traditional reinforcement learning, is difficult to implement in safety-critical systems, particularly when task restarts are unavailable. Safe model-based reinforcement learning techniques based on a barrier transformation have recently been developed to address this problem. However, these methods rely on full-state feedback, limiting their usability in real-world environments. In this work, an output-feedback safe model-based reinforcement learning technique based on a novel barrier-aware dynamic state estimator is designed to address this issue. The developed approach facilitates simultaneous learning and execution of safe control policies for safety-critical linear systems. Simulation results indicate that the barrier transformation is an effective approach to achieving online reinforcement learning in safety-critical systems using output feedback.
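
The abstract describes the barrier transformation only at a high level. As rough orientation, the sketch below shows the interval-to-ℝ barrier map commonly used in this line of work (the Graichen–Petit form, which related safe MBRL papers build on): a constrained state y ∈ (a, A), with a < 0 < A, is mapped to an unconstrained coordinate s ∈ ℝ, so that learning can run in the transformed space while the original state provably stays inside the safe interval. This is a minimal illustration under assumed bounds, not the paper's implementation; the values of a and A and all function names here are chosen for demonstration.

```python
import numpy as np

def barrier(y, a, A):
    """Map y in (a, A) to s in R; s -> -inf as y -> a and s -> +inf as y -> A."""
    return np.log((A * (a - y)) / (a * (A - y)))

def barrier_inverse(s, a, A):
    """Inverse map: recover the constrained state y from the transformed state s."""
    return a * A * (np.exp(s) - 1.0) / (a * np.exp(s) - A)

if __name__ == "__main__":
    a, A = -2.0, 3.0                   # assumed safety bounds, not values from the paper
    y = np.linspace(-1.9, 2.9, 5)      # states strictly inside the safe interval
    s = barrier(y, a, A)               # unconstrained coordinates for the learner
    y_back = barrier_inverse(s, a, A)  # mapping back recovers the original states
    print(np.allclose(y, y_back))      # True: the map is invertible on (a, A)
```

In the transformed coordinates, any finite s corresponds to a state strictly inside (a, A), which is what allows an online learner to explore freely in s-space without the original state ever leaving the safe set.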

Authors (5)
  1. Moad Abudia (11 papers)
  2. Scott A. Nivison (3 papers)
  3. Zachary I. Bell (26 papers)
  4. Rushikesh Kamalapurkar (54 papers)
  5. S M Nahid Mahmud (4 papers)
