Can Transformers Learn Optimal Filtering for Unknown Systems? (2308.08536v3)

Published 16 Aug 2023 in eess.SY, cs.AI, cs.LG, and cs.SY

Abstract: Transformer models have shown great success in natural language processing; however, their potential remains mostly unexplored for dynamical systems. In this work, we investigate the optimal output estimation problem using transformers, which generate output predictions from all past observations. In particular, we train the transformer on various distinct systems and then evaluate its performance on unseen systems with unknown dynamics. Empirically, the trained transformer adapts exceedingly well to different unseen systems and even matches the optimal performance given by the Kalman filter for linear systems. In more complex settings with non-i.i.d. noise, time-varying dynamics, and nonlinear dynamics such as a quadrotor system with unknown parameters, transformers also demonstrate promising results. To support these experimental findings, we provide statistical guarantees that quantify the amount of training data required for the transformer to achieve a desired excess risk. Finally, we point out some limitations by identifying two classes of problems that lead to degraded performance, highlighting the need for caution when using transformers for control and estimation.
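The evaluation setup described in the abstract can be illustrated with a minimal sketch (not the authors' code): draw a random linear-Gaussian system, generate an observation sequence, and compare a sequence predictor against the one-step-ahead Kalman filter, which is the optimal baseline for linear systems. The `predictor` function below is a hypothetical placeholder standing in for a trained transformer that maps past observations to a prediction of the next output; all system dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): evaluate a sequence predictor
# against the Kalman filter on a randomly drawn linear system
#   x_{t+1} = A x_t + w_t,   y_t = C x_t + v_t,   w_t ~ N(0, Q), v_t ~ N(0, R).
# A trained transformer would replace `predictor`; here a trivial
# "repeat the last observation" rule is used purely as a placeholder.

rng = np.random.default_rng(0)

def random_stable_system(n=4, m=2):
    """Draw a random stable (A, C) pair by rescaling the spectral radius."""
    A = rng.normal(size=(n, n))
    A *= 0.95 / max(abs(np.linalg.eigvals(A)))
    C = rng.normal(size=(m, n))
    return A, C

def simulate(A, C, Q, R, T):
    """Roll out the state-space model and record the noisy outputs y_0, ..., y_{T-1}."""
    n, m = A.shape[0], C.shape[0]
    x = np.zeros(n)
    ys = []
    for _ in range(T):
        ys.append(C @ x + rng.multivariate_normal(np.zeros(m), R))
        x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    return np.array(ys)

def kalman_predictions(A, C, Q, R, ys):
    """One-step-ahead output predictions C x_hat_{t|t-1} from the Kalman filter."""
    n = A.shape[0]
    x_hat = np.zeros(n)
    P = np.eye(n)
    preds = []
    for y in ys:
        preds.append(C @ x_hat)                 # predict y_t from past data only
        S = C @ P @ C.T + R                     # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)          # Kalman gain
        x_hat = x_hat + K @ (y - C @ x_hat)     # measurement update
        P = (np.eye(n) - K @ C) @ P
        x_hat = A @ x_hat                       # time update
        P = A @ P @ A.T + Q
    return np.array(preds)

def predictor(past_ys, m):
    """Placeholder for a trained transformer: predict y_t from y_0, ..., y_{t-1}."""
    return past_ys[-1] if len(past_ys) else np.zeros(m)

A, C = random_stable_system()
n, m = A.shape[0], C.shape[0]
Q, R = 0.1 * np.eye(n), 0.1 * np.eye(m)
ys = simulate(A, C, Q, R, T=200)

kf_preds = kalman_predictions(A, C, Q, R, ys)
tf_preds = np.array([predictor(ys[:t], m) for t in range(len(ys))])

print("Kalman filter MSE:", np.mean((ys - kf_preds) ** 2))
print("placeholder MSE:  ", np.mean((ys - tf_preds) ** 2))
```

In the paper's setting, the transformer would be trained across many such randomly drawn systems and then evaluated on held-out ones, with its excess risk measured relative to the Kalman filter's prediction error; the placeholder baseline above merely shows where that comparison slots in.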
