Error Scaling Laws for Linear Optimal Estimation from Relative Measurements (0904.2863v1)

Published 18 Apr 2009 in cs.IT and math.IT

Abstract: We study the problem of estimating vector-valued variables from noisy "relative" measurements. This problem arises in several sensor network applications. The measurement model can be expressed in terms of a graph, whose nodes correspond to the variables and edges to noisy measurements of the difference between two variables. We take an arbitrary variable as the reference and consider the optimal (minimum variance) linear unbiased estimate of the remaining variables. We investigate how the error in the optimal linear unbiased estimate of a node variable grows with the distance of the node to the reference node. We establish a classification of graphs, namely, dense or sparse in R^d, 1 ≤ d ≤ 3, that determines how the optimal linear unbiased estimation error of a node grows with its distance from the reference node. In particular, if a graph is dense in 1, 2, or 3D, then a node variable's estimation error is upper bounded by a linear, logarithmic, or bounded function of its distance from the reference, respectively. Corresponding lower bounds are obtained if the graph is sparse in 1, 2, or 3D. Our results also show that naive measures of graph density, such as node degree, are inadequate predictors of the estimation error. Because these scaling laws hold for the optimal linear unbiased estimate, they set algorithm-independent limits on the estimation accuracy achievable in large graphs.
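The measurement model and the optimal (minimum-variance) linear unbiased estimate described in the abstract can be made concrete with a short sketch. The code below is not from the paper: it assumes scalar node variables, a made-up path graph, and unit-variance edge noise, and computes the estimate by solving the reduced weighted-Laplacian normal equations with the reference node fixed at zero; the diagonal of the inverse reduced Laplacian then gives each node's estimation error variance.

```python
# A minimal sketch (not the paper's implementation) of optimal linear unbiased
# estimation from noisy relative measurements on a graph.
import numpy as np

def blue_estimate(n, edges, z, var, ref=0):
    """Estimate node variables from measurements z[e] ~ x[u] - x[v] + noise,
    with noise variance var[e]. The reference node is fixed to 0.
    Returns (estimates, per-node error variances)."""
    # Assemble the weighted-Laplacian normal equations L x = b.
    L = np.zeros((n, n))
    b = np.zeros(n)
    for (u, v), ze, ve in zip(edges, z, var):
        w = 1.0 / ve
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
        b[u] += w * ze
        b[v] -= w * ze
    # Ground the reference node by deleting its row and column.
    keep = [i for i in range(n) if i != ref]
    Lr = L[np.ix_(keep, keep)]
    xr = np.linalg.solve(Lr, b[keep])
    # The estimation error covariance is the inverse reduced Laplacian; its
    # diagonal gives each node's error variance (an effective resistance
    # between that node and the reference).
    cov = np.linalg.inv(Lr)
    x = np.zeros(n); evar = np.zeros(n)
    x[keep] = xr
    evar[keep] = np.diag(cov)
    return x, evar

# Example: a 4-node path graph with unit-variance measurements (made-up data).
rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0, 2.0, 3.0])
edges = [(1, 0), (2, 1), (3, 2)]
var = [1.0, 1.0, 1.0]
z = [x_true[u] - x_true[v] + rng.normal(scale=np.sqrt(s))
     for (u, v), s in zip(edges, var)]
x_hat, err_var = blue_estimate(4, edges, z, var, ref=0)
print("estimates:      ", x_hat)
print("error variances:", err_var)  # expected: [0, 1, 2, 3]
```

In this toy path graph the error variance grows linearly with hop distance from the reference (1, 2, 3), which is consistent with the linear upper bound for graphs that are dense only in 1D; denser (2D or 3D) connectivity would slow this growth to logarithmic or bounded, as the abstract states.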

Citations (46)
