
Matrix Concentration Inequalities for Sensor Selection (2403.06032v1)

Published 9 Mar 2024 in eess.SY and cs.SY

Abstract: In this work, we address the problem of sensor selection for state estimation via Kalman filtering. We consider a linear time-invariant (LTI) dynamical system subject to process and measurement noise, where the sensors used for state estimation are randomly drawn according to a sampling-with-replacement policy. Since the sensors are randomly selected, the estimation error covariance of the Kalman filter is also a stochastic quantity. Fortunately, concentration inequalities (CIs) for the estimation error covariance allow us to gauge the estimation performance we can expect when our sensors are randomly drawn with replacement. To obtain a non-trivial improvement on existing CIs for the estimation error covariance, we first present novel matrix CIs for a sum of independent and identically distributed (i.i.d.) positive semi-definite (p.s.d.) random matrices with finite support. Next, we show that our guarantees generalize an existing matrix CI. We also show that our generalized guarantees require significantly fewer sampled sensors to be applicable. Lastly, we show through a numerical study that our guarantees significantly outperform existing ones in their ability to bound (in the semi-definite sense) the steady-state estimation error covariance of the Kalman filter.
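The kind of matrix concentration inequality the abstract refers to can be illustrated numerically. The sketch below is not the paper's method: it uses a hypothetical sensor model (the dimensions, sensor rows `C`, and uniform sampling distribution are all illustrative assumptions) and checks the standard matrix Chernoff lower-tail bound for a sum of i.i.d. p.s.d. matrices drawn with replacement, of the type the paper's guarantees generalize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor model: m candidate sensors, each contributing the
# p.s.d. information matrix c_i^T c_i when selected (an assumption for
# illustration, not the paper's setup).
d, m = 4, 20
C = rng.standard_normal((m, d))
infos = np.array([np.outer(C[i], C[i]) for i in range(m)])  # (m, d, d), each p.s.d.

n = 500  # number of sensors drawn i.i.d. with replacement (uniform policy)
mean_info = infos.mean(axis=0)                    # E[X] under uniform sampling
mu_min = n * np.linalg.eigvalsh(mean_info)[0]     # lambda_min of E[sum of n draws]
L = max(np.linalg.eigvalsh(X)[-1] for X in infos) # uniform bound lambda_max(X) <= L

# Monte Carlo estimate of P( lambda_min(sum) <= (1 - delta) * mu_min ),
# compared against the matrix Chernoff lower-tail bound
#   d * ( e^{-delta} / (1 - delta)^{1 - delta} )^{mu_min / L}.
delta = 0.5
trials = 200
hits = 0
for _ in range(trials):
    idx = rng.integers(0, m, size=n)  # sampling with replacement
    S = infos[idx].sum(axis=0)
    if np.linalg.eigvalsh(S)[0] <= (1 - delta) * mu_min:
        hits += 1

chernoff = d * (np.exp(-delta) / (1 - delta) ** (1 - delta)) ** (mu_min / L)
print(f"empirical tail: {hits / trials:.3f}  Chernoff bound: {min(chernoff, 1.0):.3f}")
```

With enough samples the empirical tail probability sits well below the Chernoff bound, which is the looseness the paper's sharper inequalities aim to reduce.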

