Technical Report: Adaptive Control for Linearizable Systems Using On-Policy Reinforcement Learning (2004.02766v1)

Published 6 Apr 2020 in cs.LG, math.DS, math.OC, and stat.ML

Abstract: This paper proposes a framework for adaptively learning a feedback linearization-based tracking controller for an unknown system using discrete-time model-free policy-gradient parameter update rules. The primary advantage of the scheme over standard model-reference adaptive control techniques is that it does not require the learned inverse model to be invertible at all instances of time. This enables the use of general function approximators to approximate the linearizing controller for the system without having to worry about singularities. However, the discrete-time and stochastic nature of these algorithms precludes the direct application of standard machinery from the adaptive control literature to provide deterministic stability proofs for the system. Nevertheless, we leverage these techniques alongside tools from the stochastic approximation literature to demonstrate that with high probability the tracking and parameter errors concentrate near zero when a certain persistence of excitation condition is satisfied. A simulated example of a double pendulum demonstrates the utility of the proposed theory.
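The core idea lends itself to a short illustration. The sketch below is not the authors' implementation: the scalar plant, feature map, gains, and noise level are invented for the example. It adapts a linearly parameterized controller with a REINFORCE-style likelihood-ratio gradient of the one-step tracking cost, so the learned controller is never required to invert a model (and hence never hits a singularity):

```python
# Minimal sketch (assumptions, not the paper's code): model-free
# policy-gradient adaptation of a learned feedback-linearizing
# controller on a toy scalar control-affine system.
import numpy as np

rng = np.random.default_rng(0)

# Unknown plant: x_dot = f(x) + g(x) * u (hidden from the learner).
f = lambda x: -np.sin(x)
g = lambda x: 2.0 + np.cos(x)

# Parameterized controller u = theta @ phi(x, v). Exact linearization
# would be u = (v - f(x)) / g(x), but theta is learned directly, so no
# explicit model inverse is ever formed.
def phi(x, v):
    return np.array([v, v * x, 1.0, x, np.sin(x), np.cos(x)])

theta = np.zeros(6)
dt, sigma, lr = 0.01, 0.1, 0.05  # step size, exploration noise, learning rate
x = 0.0

for k in range(20000):
    t = k * dt
    x_ref = np.sin(t)                   # persistently exciting reference
    v = -5.0 * (x - x_ref) + np.cos(t)  # input for the desired linear dynamics

    feat = phi(x, v)
    noise = sigma * rng.standard_normal()  # exploration for the gradient estimate
    u = theta @ feat + noise
    x_next = x + dt * (f(x) + g(x) * u)    # Euler step of the true plant

    # One-step tracking cost and its REINFORCE (likelihood-ratio)
    # gradient estimate: grad ~= cost * (noise / sigma^2) * phi(x, v).
    cost = (x_next - np.sin(t + dt)) ** 2
    grad = cost * (noise / sigma**2) * feat
    theta -= lr * np.clip(grad, -1.0, 1.0)  # crude clipping for stability

    x = x_next

print("final tracking error:", abs(x - np.sin(20000 * dt)))
```

The sinusoidal reference plays the role of the persistence-of-excitation condition in the paper's analysis, and the stochastic, discrete-time update is exactly why the authors turn to stochastic approximation tools rather than classical deterministic adaptive-control proofs.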

Authors (6)
  1. Tyler Westenbroek (18 papers)
  2. Eric Mazumdar (37 papers)
  3. David Fridovich-Keil (73 papers)
  4. Valmik Prabhu (3 papers)
  5. Claire J. Tomlin (101 papers)
  6. S. Shankar Sastry (77 papers)
