Youla-REN: Learning Nonlinear Feedback Policies with Robust Stability Guarantees
Abstract: This paper presents a parameterization of nonlinear controllers for uncertain systems, building on a recently developed neural network architecture, the recurrent equilibrium network (REN), and a nonlinear version of the Youla parameterization. The proposed framework has "built-in" guarantees of stability: all policies in the search space result in a contracting (globally exponentially stable) closed-loop system. Consequently, only very mild assumptions on the choice of cost function are required, and the stability property generalizes to unseen data. Another useful feature of this approach is that policies are parameterized directly, without any constraints, which simplifies learning by a broad range of policy-learning methods based on unconstrained optimization (e.g., stochastic gradient descent). We illustrate the proposed approach with a variety of simulation examples.
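The "direct, unconstrained parameterization with built-in stability" idea can be illustrated with a toy sketch. The snippet below is not the paper's REN or Youla construction; it is a minimal linear analogue in which an arbitrary free parameter matrix `W` (the quantity a gradient-based optimizer would update) is smoothly mapped to a system matrix with spectral norm below one, so every point in the unconstrained search space yields a contracting system:

```python
import numpy as np

rng = np.random.default_rng(0)

def contracting_A(W, gamma=0.95):
    """Map an unconstrained matrix W to one with spectral norm < gamma.

    Toy stand-in for a "direct" parameterization: every value of the
    free parameter W gives a contracting system, so unconstrained
    gradient descent over W can never leave the stable set.
    """
    return gamma * W / (1.0 + np.linalg.norm(W, 2))

# An arbitrary (here, large and random) unconstrained parameter.
W = rng.normal(size=(4, 4)) * 10.0
A = contracting_A(W)

# Contraction: two trajectories of x_{t+1} = A x_t + u_t from
# different initial states converge under the same input sequence.
x, z = rng.normal(size=4), rng.normal(size=4)
d0 = np.linalg.norm(x - z)
for _ in range(50):
    u = rng.normal(size=4)
    x, z = A @ x + u, A @ z + u
print(np.linalg.norm(x - z) / d0)  # far below 1: trajectories have contracted
```

Since the spectral norm of `A` is at most `gamma = 0.95`, the distance between the two trajectories shrinks by at least a factor of 0.95 per step regardless of the inputs. The paper's REN plays the role of `contracting_A` for rich nonlinear dynamic policies, and the Youla parameterization transfers this guarantee to the closed loop.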