Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples (2010.04015v1)

Published 8 Oct 2020 in cs.LG, cs.SY, eess.SY, math.OC, and stat.ML

Abstract: In this work, we study the problem of learning partially observed linear dynamical systems from a single sample trajectory. A major practical challenge in existing system identification methods is the undesirable dependency of their required sample size on the system dimension: roughly speaking, they presume and rely on sample sizes that scale linearly with the system dimension. Evidently, in the high-dimensional regime where the system dimension is large, it may be costly, if not impossible, to collect that many samples from the unknown system. In this paper, we remedy this undesirable dependency on the system dimension by introducing an $\ell_1$-regularized estimation method that can accurately estimate the Markov parameters of the system, provided that the number of samples scales logarithmically with the system dimension. Our result significantly improves the sample complexity of learning partially observed linear dynamical systems: it shows that the Markov parameters of the system can be learned in the high-dimensional setting, where the number of samples is significantly smaller than the system dimension. Traditionally, $\ell_1$-regularized estimators have been used to promote sparsity in the estimated parameters. By resorting to the notion of "weak sparsity", we show that, irrespective of the true sparsity of the system, a similar regularized estimator can be used to reduce the sample complexity of learning partially observed linear systems, provided that the true system is inherently stable.
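As a rough illustration of the regression behind the abstract (not the paper's exact estimator), the Python sketch below fits the leading Markov parameters of a simulated stable system from a single input-output trajectory using an $\ell_1$-regularized (Lasso) least-squares fit. The system matrices, noise scales, truncation length, and regularization weight are illustrative assumptions, not quantities from the paper.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, p = 20, 3, 2            # state, input, output dimensions (assumed)
T, L = 400, 8                 # trajectory length, number of Markov parameters

# Random stable system: rescale A so its spectral radius is about 0.5.
A = rng.standard_normal((n, n))
A *= 0.5 / np.abs(np.linalg.eigvals(A)).max()
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))

# Simulate a single trajectory driven by Gaussian inputs with small noise.
u = rng.standard_normal((T, m))
x = np.zeros(n)
Y = np.zeros((T, p))
for t in range(T):
    Y[t] = C @ x + D @ u[t] + 0.01 * rng.standard_normal(p)
    x = A @ x + B @ u[t] + 0.01 * rng.standard_normal(n)

# Regress y_t on the stacked recent inputs [u_t, u_{t-1}, ..., u_{t-L+1}];
# the coefficient blocks then estimate the Markov parameters [D, CB, CAB, ...].
Phi = np.array([np.concatenate([u[t - k] for k in range(L)])
                for t in range(L - 1, T)])
Yt = Y[L - 1:]

lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000)
lasso.fit(Phi, Yt)
G_hat = lasso.coef_           # shape (p, L*m): estimated Markov parameters

# Sanity check against the true Markov parameters.
G_true = np.hstack([D] + [C @ np.linalg.matrix_power(A, k - 1) @ B
                          for k in range(1, L)])
print("relative error:", np.linalg.norm(G_hat - G_true) / np.linalg.norm(G_true))

The $\ell_1$ penalty here plays the role the abstract describes: the Markov parameters of a stable system decay geometrically and are therefore "weakly sparse", so the regularizer keeps the effective number of estimated parameters small even when the ambient dimension is large.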

Citations (12)
