
Well-Conditioned Linear Minimum Mean Square Error Estimation (2201.02275v4)

Published 6 Jan 2022 in eess.SP, cs.IT, cs.SY, eess.SY, math.IT, and stat.CO

Abstract: Linear minimum mean square error (LMMSE) estimation is often ill-conditioned, suggesting that unconstrained minimization of the mean square error is an inadequate approach to filter design. To address this, we first develop a unifying framework for studying constrained LMMSE estimation problems. Using this framework, we explore an important structural property of constrained LMMSE filters involving a certain prefilter. Optimality is invariant under invertible linear transformations of the prefilter. This parameterizes all optimal filters by equivalence classes of prefilters. We then clarify that merely constraining the rank of the filter does not suitably address the problem of ill-conditioning. Instead, we adopt a constraint that explicitly requires solutions to be well-conditioned in a certain specific sense. We introduce two well-conditioned filters and show that they converge to the unconstrained LMMSE filter as their truncation-power loss goes to zero, at the same rate as the low-rank Wiener filter. We also show extensions to the case of weighted trace and determinant of the error covariance as objective functions. Finally, our quantitative results with historical VIX data demonstrate that our two well-conditioned filters have stable performance while the standard LMMSE filter deteriorates with increasing condition number.
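The sketch below is not the paper's constrained estimators; it is only a minimal illustration, under assumed synthetic covariances, of the standard unconstrained LMMSE filter W = R_xy R_yy^{-1} and of why a large condition number of R_yy destabilizes it. The dimension n, the construction of R_yy, the cross-covariance R_xy, and the perturbation size are all illustrative choices, not values from the paper.

```python
# Minimal sketch (illustrative, not the paper's method): the unconstrained
# LMMSE filter W = R_xy R_yy^{-1} and its sensitivity when R_yy is
# ill-conditioned. All dimensions and covariances are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 20  # observation dimension (illustrative)

# Orthogonal basis used to build an observation covariance R_yy
# with a prescribed condition number.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

def make_Ryy(cond):
    # Eigenvalues decaying from 1 down to 1/cond, so cond(R_yy) ~ cond.
    eigvals = np.logspace(0, -np.log10(cond), n)
    return Q @ np.diag(eigvals) @ Q.T

# Cross-covariance between a scalar signal x and the observation y (illustrative).
R_xy = rng.standard_normal((1, n))

for cond in [1e1, 1e4, 1e8]:
    R_yy = make_Ryy(cond)
    W = R_xy @ np.linalg.inv(R_yy)  # unconstrained LMMSE (Wiener) filter

    # Perturb R_yy slightly (e.g., sample-covariance estimation error) and
    # recompute the filter; ill-conditioning amplifies the resulting change.
    R_yy_hat = R_yy + 1e-6 * np.diag(rng.random(n))
    W_hat = R_xy @ np.linalg.inv(R_yy_hat)

    rel_change = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
    print(f"cond(R_yy) ~ {cond:.0e}: relative change in filter = {rel_change:.2e}")
```

Running this shows the relative change in the filter growing with the condition number of R_yy, which is the kind of deterioration the paper's well-conditioned filters are designed to avoid.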

Citations (5)
