Optimal PID and Antiwindup Control Design as a Reinforcement Learning Problem (2005.04539v1)

Published 10 May 2020 in math.OC, cs.LG, cs.SY, and eess.SY

Abstract: Deep reinforcement learning (DRL) has seen several successful applications to process control. Common methods rely on a deep neural network structure to model the controller or process. With increasingly complicated control structures, the closed-loop stability of such methods becomes less clear. In this work, we focus on the interpretability of DRL control methods. In particular, we view linear fixed-structure controllers as shallow neural networks embedded in the actor-critic framework. PID controllers guide our development due to their simplicity and acceptance in industrial practice. We then consider input saturation, leading to a simple nonlinear control structure. In order to effectively operate within the actuator limits, we then incorporate a tuning parameter for anti-windup compensation. Finally, the simplicity of the controller allows for straightforward initialization. This makes our method inherently stabilizing, both during and after training, and amenable to known operational PID gains.

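The core idea, a PID law with input saturation and anti-windup viewed as a shallow, trainable actor, can be sketched in a few lines. Below is a minimal illustration assuming a PyTorch-style parameterization; the class and symbol names (`PIDActor`, `rho` for the anti-windup gain) are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class PIDActor(nn.Module):
    """A PID controller with actuator saturation and back-calculation
    anti-windup, expressed as a shallow 'network' whose only weights are
    the PID gains (kp, ki, kd) and the anti-windup parameter rho. These
    are the trainable parameters an actor-critic method would update.
    (Sketch only; names and defaults are assumptions, not the paper's.)"""

    def __init__(self, kp=1.0, ki=0.1, kd=0.0, rho=0.1,
                 u_min=-1.0, u_max=1.0, dt=0.1):
        super().__init__()
        # Initializing from known operational gains means the policy is
        # a stabilizing controller from the very first training step.
        self.kp = nn.Parameter(torch.tensor(kp))
        self.ki = nn.Parameter(torch.tensor(ki))
        self.kd = nn.Parameter(torch.tensor(kd))
        self.rho = nn.Parameter(torch.tensor(rho))  # anti-windup gain
        self.u_min, self.u_max, self.dt = u_min, u_max, dt

    def forward(self, e, i_state, e_prev):
        """One control step.
        e: current setpoint error, i_state: integral of the error so far,
        e_prev: error at the previous step (for the derivative term)."""
        de = (e - e_prev) / self.dt
        u = self.kp * e + self.ki * i_state + self.kd * de   # linear PID law
        u_sat = torch.clamp(u, self.u_min, self.u_max)       # actuator limits
        # Back-calculation: whenever the actuator saturates, (u_sat - u)
        # is nonzero and bleeds off the integrator, preventing windup.
        i_next = i_state + self.dt * (e + self.rho * (u_sat - u))
        return u_sat, i_next
```

In the actor-critic setting, `kp`, `ki`, `kd`, and `rho` play the role of the actor's weights: a critic supplies policy-gradient updates to them, while the clamp ensures every intermediate policy remains a valid saturating PID controller throughout training.
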
Authors (6)
  1. Nathan P. Lawrence (20 papers)
  2. Gregory E. Stewart (2 papers)
  3. Michael G. Forbes (13 papers)
  4. R. Bhushan Gopaluni (22 papers)
  5. Philip D. Loewen (14 papers)
  6. Johan U. Backstrom (4 papers)
Citations (22)