
Revealing Unobservables by Deep Learning: Generative Element Extraction Networks (GEEN) (2210.01300v1)

Published 4 Oct 2022 in stat.ML, cs.LG, and econ.EM

Abstract: Latent variable models are crucial in scientific research, where a key variable, such as effort, ability, or belief, is unobserved in the sample but needs to be identified. This paper proposes a novel method for estimating realizations of a latent variable $X*$ in a random sample that contains multiple measurements of it. Under the key assumption that the measurements are independent conditional on $X*$, we provide sufficient conditions under which realizations of $X*$ in the sample are locally unique within a class of deviations, which allows us to identify those realizations. To the best of our knowledge, this paper is the first to provide such identification at the observation level. We then use the Kullback-Leibler distance between the two probability densities, with and without the conditional independence, as the loss function to train Generative Element Extraction Networks (GEEN) that map the observed measurements to realizations of $X*$ in the sample. The simulation results show that the proposed estimator works well: the estimated values are highly correlated with the realizations of $X*$. Our estimator can be applied to a large class of latent variable models, and we expect it will change how people deal with latent variables.
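The measurement model underlying the abstract can be illustrated with a small simulation. The sketch below is not the paper's GEEN estimator; it only generates data satisfying the key assumption (multiple measurements that are independent conditional on $X*$, here via additive noise) and uses a naive average of the measurements as a stand-in estimator, to show that a sample-level estimate can be highly correlated with the latent realizations. The noise scales and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent realizations X* in the sample (unobserved in practice)
x_star = rng.normal(size=n)

# Three measurements of X*, independent conditional on X*
# (each is X* plus independent noise with a different scale)
measurements = np.stack(
    [x_star + rng.normal(scale=s, size=n) for s in (0.3, 0.5, 0.8)]
)

# Naive stand-in for an estimator mapping measurements -> X* realizations
# (GEEN would instead train a network with a KL-based loss)
x_hat = measurements.mean(axis=0)

corr = np.corrcoef(x_hat, x_star)[0, 1]
print(f"correlation(x_hat, x_star) = {corr:.3f}")
```

With these noise scales the naive average already correlates above 0.9 with the latent realizations; the paper's contribution is identifying conditions under which such realizations are recoverable and learning the mapping with a trained network rather than a fixed aggregation rule.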

Citations (1)
