
Linear models based on noisy data and the Frisch scheme (1304.3877v1)

Published 14 Apr 2013 in cs.SY, math.OC, math.ST, and stat.TH

Abstract: We address the problem of identifying linear relations among variables based on noisy measurements. This is, of course, a central question in problems involving "Big Data." Often a key assumption is that measurement errors in each variable are independent. This precise formulation has its roots in the work of Charles Spearman in 1904 and of Ragnar Frisch in the 1930s. Various topics such as errors-in-variables, factor analysis, and instrumental variables all refer to alternative formulations of the problem of how to account for the anticipated way that noise enters the data. In the present paper we begin by describing the basic theory and provide alternative modern proofs to some key results. We then go on to consider certain generalizations of the theory as well as applying certain novel numerical techniques to the problem. A central role is played by the Frisch-Kalman dictum, which aims at a noise contribution that allows a maximal set of simultaneous linear relations among the noise-free variables, a rank minimization problem. In the years since Frisch's original formulation, there have been several insights, including trace minimization as a convenient heuristic to replace rank minimization. We discuss convex relaxations and certificates guaranteeing global optimality. A complementary point of view to the Frisch-Kalman dictum is introduced in which models lead to a min-max quadratic estimation error for the error-free variables. Points of contact between the two formalisms are discussed and various alternative regularization schemes are indicated.
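The trace-minimization heuristic mentioned in the abstract can be illustrated with a short convex-optimization sketch. This is not the paper's implementation; it is a minimal example, assuming the standard Frisch-scheme setup in which a sample covariance of noisy measurements is decomposed into a low-rank "noise-free" part plus a diagonal noise covariance, with the trace of the noise-free part minimized as a convex surrogate for its rank. All variable names and the synthetic data are illustrative.

```python
# Sketch of the trace-minimization heuristic for the Frisch scheme (assumed setup):
# given a sample covariance Sigma of noisy measurements, find a diagonal PSD
# noise covariance D = diag(d) such that Sigma - D stays PSD and
# trace(Sigma - D) is minimized, a convex surrogate for minimizing rank(Sigma - D).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic example: low-rank noise-free covariance plus independent measurement noise.
n, r = 6, 2
A = rng.standard_normal((n, r))
Sigma_true = A @ A.T                          # rank-r noise-free covariance
D_true = np.diag(rng.uniform(0.1, 0.5, n))    # independent (diagonal) noise
Sigma = Sigma_true + D_true
Sigma = 0.5 * (Sigma + Sigma.T)               # enforce exact symmetry

# Convex relaxation: minimize trace(Sigma - diag(d))  s.t.  Sigma - diag(d) >> 0, d >= 0.
d = cp.Variable(n, nonneg=True)
objective = cp.Minimize(cp.trace(Sigma - cp.diag(d)))
constraints = [Sigma - cp.diag(d) >> 0]
cp.Problem(objective, constraints).solve()

Sigma_x = Sigma - np.diag(d.value)
print("recovered noise variances:", np.round(d.value, 3))
print("numerical rank of Sigma - D:", np.linalg.matrix_rank(Sigma_x, tol=1e-6))
```

Since trace(Sigma) is fixed, minimizing trace(Sigma - diag(d)) is the same as maximizing the total noise variance sum(d) subject to the residual covariance remaining positive semidefinite, which is the form in which the heuristic is often stated.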

Citations (26)

Summary

We haven't generated a summary for this paper yet.


Follow-up Questions

We haven't generated follow-up questions for this paper yet.
