
Online Learning of the Kalman Filter with Logarithmic Regret (2002.05141v1)

Published 12 Feb 2020 in cs.LG, cs.SY, eess.SY, math.OC, and stat.ML

Abstract: In this paper, we consider the problem of predicting observations generated online by an unknown, partially observed linear system driven by stochastic noise. For such systems the optimal predictor in the mean square sense is the celebrated Kalman filter, which can be computed explicitly when the system model is known. When the system model is unknown, we have to learn how to predict observations online based on finite data, possibly suffering non-zero regret with respect to the Kalman filter's predictions. We show that it is possible to achieve regret of order $\mathrm{poly}\log(N)$ with high probability, where $N$ is the number of observations collected. Our work is the first to provide logarithmic regret guarantees for the widely used Kalman filter. This is achieved using an online least-squares algorithm that exploits the approximately linear relation between future observations and past observations. The regret analysis is based on the stability properties of the Kalman filter, recent statistical tools for finite-sample analysis of system identification, and classical results for the analysis of least-squares algorithms for time series. Our regret analysis also applies to prediction of the hidden state, in the case of unknown noise statistics but known state-space basis. A fundamental technical contribution is that our bounds hold even for the class of non-explosive systems, which includes marginally stable systems; this was an open problem for online prediction under stochastic noise.
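The core idea described in the abstract, predicting the next observation by online least squares on a window of past observations rather than running a model-based Kalman filter, can be illustrated with a short sketch. The snippet below is not the paper's exact algorithm: the toy system, window length p, and ridge parameter lam are illustrative assumptions, and the update is a standard recursive least-squares (Sherman-Morrison) step.

```python
import numpy as np

# Minimal sketch: predict y_t from the previous p observations via online
# (recursive) least squares, with no knowledge of the system matrices.
rng = np.random.default_rng(0)

# Toy partially observed linear system: x_{t+1} = A x_t + w_t, y_t = C x_t + v_t
A = np.array([[0.9, 0.1], [0.0, 0.95]])   # stable (non-explosive) dynamics -- assumed example
C = np.array([[1.0, 0.0]])
N, p, lam = 2000, 10, 1.0                 # horizon, regression window, ridge parameter (assumed)

x = np.zeros(2)
ys = []
for _ in range(N):
    ys.append((C @ x + 0.1 * rng.standard_normal(1))[0])
    x = A @ x + 0.1 * rng.standard_normal(2)

# Recursive least squares: theta maps the last p observations to the next one.
P = np.eye(p) / lam          # inverse of the regularized Gram matrix
theta = np.zeros(p)
sq_err = 0.0
for t in range(p, N):
    z = np.array(ys[t - p:t][::-1])       # regressor: most recent observation first
    y_hat = theta @ z                     # one-step prediction
    err = ys[t] - y_hat
    sq_err += err ** 2
    # Sherman-Morrison rank-one update of P, then the parameter update
    Pz = P @ z
    P -= np.outer(Pz, Pz) / (1.0 + z @ Pz)
    theta += (P @ z) * err

print(f"average squared one-step prediction error: {sq_err / (N - p):.4f}")
```

The regret studied in the paper compares the cumulative prediction error of such an online predictor against that of the exact Kalman filter built from the true (but unknown) system model.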

Citations (26)
