
Attack-Resilient Weighted $\ell_1$ Observer with Prior Pruning (2104.03580v1)

Published 8 Apr 2021 in math.OC, cs.SY, and eess.SY

Abstract: Security-related questions for Cyber-Physical Systems (CPS) have attracted much research attention in the search for novel methods of attack-resilient control and/or estimation. Specifically, false data injection attacks (FDIAs) have been shown to be capable of bypassing bad data detection (BDD) while arbitrarily compromising the integrity of state estimators and robust controllers, even with very sparse measurement corruption. Moreover, exploiting the inherent sparsity of pragmatic attack signals, $\ell_1$-minimization schemes have been used extensively to improve the design of attack-resilient estimators. For these, the theoretical maximum percentage of compromised nodes that can be accommodated has been shown to be $50\%$. In order to guarantee correct state recovery for a larger percentage of attacked nodes, researchers have begun to incorporate prior information into the underlying resilient observer design framework. In the most pragmatic cases, this prior information is obtained through some data-driven machine learning process. Existing results have shown a strong positive correlation between the tolerated attack percentage and the precision of the prior information. In this paper, we present a pruning method that improves the precision of the prior information, given the stochastic uncertainty characteristics of the underlying machine learning model. A weighted $\ell_1$-minimization is then proposed based on the pruned prior. Theoretical and simulation results show that the pruning method significantly improves observer performance for much larger attack percentages, even when a moderately accurate machine learning model is used.
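
The mechanism the abstract describes can be illustrated with a short numerical sketch. The snippet below assumes a static linear measurement model $y = Hx + e$ with a sparse attack vector $e$; the ML prior `prior_prob`, the pruning threshold `tau`, and the weight rule `w = 1/(pruned + eps)` are hypothetical stand-ins for the paper's actual constructions, and `cvxpy` is used only as a convenient convex solver.

```python
# A minimal sketch of prior-pruned weighted l1 recovery, assuming
# y = H x + e with a sparse attack e. The prior, pruning rule, and
# weight mapping below are illustrative assumptions, not the paper's
# exact design.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

n, m = 10, 60                      # state dimension, number of measurement nodes
H = rng.standard_normal((m, n))    # measurement matrix
x_true = rng.standard_normal(n)

# Sparse attack on ~60% of nodes -- beyond the 50% limit of unweighted l1.
attacked = rng.permutation(m)[: int(0.6 * m)]
e = np.zeros(m)
e[attacked] = 5.0 * rng.standard_normal(attacked.size)
y = H @ x_true + e

# Hypothetical ML prior: per-node attack probability, corrupted by noise
# to mimic a moderately accurate learned model.
prior_prob = np.clip(np.isin(np.arange(m), attacked).astype(float)
                     + 0.3 * rng.standard_normal(m), 0.0, 1.0)

# Pruning step (illustrative): keep only confident prior entries and
# reset uncertain ones to a neutral 0.5 before computing weights.
tau = 0.25
pruned = np.where(np.abs(prior_prob - 0.5) >= tau, prior_prob, 0.5)

# Weights: trust (large weight) the nodes the pruned prior deems safe.
eps = 1e-3
w = 1.0 / (pruned + eps)

# Weighted l1 estimator: min_x || diag(w) (y - H x) ||_1
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, y - H @ x)))).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

The intended effect is that large weights on nodes the pruned prior deems safe force the residual to concentrate on the suspected nodes, which is how prior information can push the tolerable attack fraction beyond the classical $50\%$ barrier, while pruning guards against the prior's own uncertainty.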
