A Framework for Time-Varying Optimization via Derivative Estimation (2403.19088v1)

Published 28 Mar 2024 in math.OC, cs.SY, and eess.SY

Abstract: Optimization algorithms have a rich and fundamental relationship with ordinary differential equations given by their continuous-time limits. When the cost function varies with time -- typically in response to a dynamically changing environment -- online optimization becomes a continuous-time trajectory tracking problem. To accommodate these time variations, one typically requires some inherent knowledge about their nature, such as a time derivative. In this paper, we propose a novel construction and analysis of a continuous-time derivative estimation scheme based on "dirty derivatives", and show how it naturally interfaces with continuous-time optimization algorithms using the language of ISS (Input-to-State Stability). More generally, we show how a simple Lyapunov redesign technique leads to provable suboptimality guarantees when composing this estimator with any well-behaved optimization algorithm for time-varying costs.
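
To make the idea concrete, here is a minimal sketch (not the paper's construction) of how a "dirty-derivative" filter can be composed with a continuous-time gradient flow for a time-varying cost. It assumes a scalar quadratic cost f(x, t) = 0.5 (x - r(t))^2 whose minimizer is r(t), a first-order Tustin-discretized dirty-derivative filter, forward-Euler integration, and illustrative parameter names (k, tau, dt) that are not from the paper.

```python
import numpy as np

# Hypothetical illustration: track the minimizer of the time-varying cost
#   f(x, t) = 0.5 * (x - r(t))**2,
# whose minimizer is r(t). A plain gradient flow  xdot = -k * (x - r(t))
# lags behind r(t); adding a dirty-derivative estimate of rdot(t) as a
# feedforward term shrinks the asymptotic tracking error.

dt, T = 1e-3, 20.0           # integration step and horizon
k = 5.0                      # gradient-flow gain
tau = 0.02                   # dirty-derivative filter time constant

r = lambda t: np.sin(t)      # time-varying minimizer, known only through samples

x_plain, x_ff = 0.0, 0.0     # states: plain flow vs. flow with feedforward
rdot_hat = 0.0               # dirty-derivative estimate of rdot(t)
r_prev = r(0.0)

err_plain, err_ff = [], []
for step in range(int(T / dt)):
    t = step * dt
    r_t = r(t)

    # Dirty derivative: first-order filter approximating s / (tau*s + 1),
    # discretized with Tustin's rule and driven only by samples of r(t).
    rdot_hat = ((2 * tau - dt) * rdot_hat + 2 * (r_t - r_prev)) / (2 * tau + dt)
    r_prev = r_t

    # Forward-Euler integration of the two continuous-time flows.
    x_plain += dt * (-k * (x_plain - r_t))               # no time-variation info
    x_ff    += dt * (-k * (x_ff - r_t) + rdot_hat)       # estimator in the loop

    err_plain.append(abs(x_plain - r_t))
    err_ff.append(abs(x_ff - r_t))

# Compare tracking errors over the last quarter of the horizon.
tail = len(err_plain) // 4
print("mean |x - r|, plain gradient flow:          ", np.mean(err_plain[-tail:]))
print("mean |x - r|, with dirty-derivative feedforward:", np.mean(err_ff[-tail:]))
```

In this toy setting the plain flow settles to a nonzero lag behind r(t), while the flow augmented with the estimated time derivative tracks it closely; the paper's ISS framing can be read as a way of quantifying how the estimation error of such a filter propagates into the suboptimality of the composed scheme.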
