
Matching Observations to Distributions: Efficient Estimation via Sparsified Hungarian Algorithm (1806.06766v2)

Published 18 Jun 2018 in cs.DS and cs.SY

Abstract: Suppose we are given observations, where each observation is drawn independently from one of $k$ known distributions. The goal is to match each observation to the distribution from which it was drawn. We observe that the maximum likelihood estimator (MLE) for this problem can be computed using weighted bipartite matching, even when $n$, the number of observations per distribution, exceeds one. This is achieved by instantiating $n$ duplicates of each distribution node. However, in the regime where the number of observations per distribution is much larger than the number of distributions, the Hungarian matching algorithm for computing the weighted bipartite matching requires $\mathcal O(n3)$ time. We introduce a novel randomized matching algorithm that reduces the runtime to $\tilde{\mathcal O}(n2)$ by sparsifying the original graph, returning the exact MLE with high probability. Next, we give statistical justification for using the MLE by bounding the excess risk of the MLE, where the loss is defined as the negative log-likelihood. We test these bounds for the case of isotropic Gaussians with equal covariances and whose means are separated by a distance $\eta$, and find (1) that $\gg \log k$ separation suffices to drive the proportion of mismatches of the MLE to 0, and (2) that the expected fraction of mismatched observations goes to zero at rate $\mathcal O({(\log k)}2/\eta2)$.
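The abstract's reduction of the MLE to weighted bipartite matching can be illustrated with a small sketch. The snippet below is not the paper's sparsified algorithm; it uses a dense Hungarian-style solver (SciPy's linear_sum_assignment) on negative log-likelihood costs, with $n$ duplicate columns per distribution node. The isotropic 1-D Gaussians, their means, and the random seed are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import norm

k, n = 3, 5                           # k distributions, n observations per distribution
rng = np.random.default_rng(0)
means = np.array([0.0, 5.0, 10.0])    # hypothetical, well-separated means

# Draw n observations from each distribution (ground-truth assignment is known here).
obs = np.concatenate([rng.normal(mu, 1.0, size=n) for mu in means])

# Negative log-likelihood of each observation under each distribution: shape (k*n, k).
neg_loglik = -norm.logpdf(obs[:, None], loc=means[None, :], scale=1.0)

# Duplicate each distribution column n times so the bipartite graph has one node
# per observation slot, as described in the abstract: shape (k*n, k*n).
cost = np.repeat(neg_loglik, n, axis=1)

# Dense weighted bipartite matching (cubic in the total size); the paper's
# contribution is a sparsified randomized variant that returns the same MLE
# with high probability in roughly quadratic time.
row_ind, col_ind = linear_sum_assignment(cost)
mle_assignment = col_ind // n         # map duplicate columns back to distribution indices

print(mle_assignment)
```

With well-separated means, the recovered assignment should agree with the true one for almost all observations, consistent with the mismatch bounds stated in the abstract.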
