A finite-sample generalization bound for stable LPV systems

Published 16 May 2024 in cs.LG, cs.SY, and eess.SY (arXiv:2405.10054v3)

Abstract: One of the main theoretical challenges in learning dynamical systems from data is providing upper bounds on the generalization error, that is, the difference between the expected prediction error and the empirical prediction error measured on some finite sample. In machine learning, a popular class of such bounds is the so-called Probably Approximately Correct (PAC) bounds. In this paper, we derive a PAC bound for stable continuous-time linear parameter-varying (LPV) systems. Our bound depends on the H2 norm of the chosen class of LPV systems, but does not depend on the time interval over which the signals are considered.
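To make the quantities in the abstract concrete, the following is a minimal sketch in standard notation, not the paper's actual statement: a continuous-time LPV system with scheduling signal p(t), the expected and empirical prediction errors whose gap is being bounded, and the generic shape of a PAC bound whose complexity term involves an H2-type norm of the model class and is independent of the length of the time interval. The loss ℓ, the constants c1 and c2, and the exact choice of norm are illustrative assumptions.

\[
\dot{x}(t) = A(p(t))\,x(t) + B(p(t))\,u(t), \qquad y(t) = C(p(t))\,x(t),
\]
\[
\mathcal{L}(f) = \mathbb{E}\,\ell\big(y, f(u)\big), \qquad
\widehat{\mathcal{L}}_N(f) = \frac{1}{N}\sum_{i=1}^{N} \ell\big(y^{(i)}, f(u^{(i)})\big),
\]
\[
\mathbb{P}\!\left( \sup_{f \in \mathcal{F}} \Big( \mathcal{L}(f) - \widehat{\mathcal{L}}_N(f) \Big)
\;\le\; \frac{c_1 \, \sup_{f \in \mathcal{F}} \|f\|_{\mathcal{H}_2} \;+\; c_2 \sqrt{\ln(1/\delta)}}{\sqrt{N}} \right) \;\ge\; 1 - \delta .
\]

The feature emphasized in the abstract is that the right-hand side depends on the H2 norm of the model class and the number of samples N, but not on the length of the time window on which the input and output signals are observed.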

