Steady-State Error Compensation for Reinforcement Learning with Quadratic Rewards
Abstract: The choice of reward function in Reinforcement Learning (RL) has received significant attention because of its impact on system performance. When quadratic reward functions are used, significant steady-state errors often arise. Absolute-value reward functions alleviate this problem, but they tend to induce substantial fluctuations in certain system states, leading to abrupt changes. To address this challenge, this study proposes introducing an integral term. By adding the integral term to a quadratic reward function, the RL algorithm is tuned to account for the reward history, which in turn mitigates steady-state errors. Through experiments and performance evaluations on Adaptive Cruise Control (ACC) and lane-change models, we validate that the proposed method effectively reduces steady-state errors without inducing large spikes in system states.
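As a rough illustration of the idea, the sketch below augments a standard quadratic tracking cost with an accumulated-error (integral) term, so persistent offsets keep accruing penalty even when the instantaneous quadratic terms are small. This is a minimal sketch of the mechanism the abstract describes, not the paper's implementation; the class name and the weights `q`, `r`, `k_i` are illustrative assumptions.

```python
class QuadraticIntegralReward:
    """Quadratic tracking reward augmented with an integral-of-error term.

    Illustrative sketch only: the weights q, r, k_i and the time step dt
    are assumed values, not settings reported in the paper.
    """

    def __init__(self, q=1.0, r=0.1, k_i=0.5, dt=0.05):
        self.q = q        # weight on squared tracking error
        self.r = r        # weight on squared control effort
        self.k_i = k_i    # weight on the accumulated (integral) error
        self.dt = dt      # integration time step
        self.integral = 0.0

    def reset(self):
        """Clear the accumulated error at the start of each episode."""
        self.integral = 0.0

    def __call__(self, error, action):
        """Return the negative cost: the usual quadratic terms plus an
        integral term that penalizes persistent steady-state error."""
        self.integral += error * self.dt
        return -(self.q * error ** 2
                 + self.r * action ** 2
                 + self.k_i * self.integral ** 2)
```

In an ACC-style setting, `error` might be the deviation from the desired headway and `action` the commanded acceleration; calling `reset()` at each episode start keeps the accumulator from leaking reward history across rollouts.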