
Noise-Resilient Group Testing: Limitations and Constructions (0811.2609v3)

Published 17 Nov 2008 in cs.DM, cs.IT, math.CO, and math.IT

Abstract: We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we turn this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information theoretic lower bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor $\tilde{\Omega}(d)$. Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m=O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(m/d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors.
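To make the setting concrete, below is a minimal sketch of noisy non-adaptive group testing with approximate reconstruction. It uses a plain random Bernoulli design and a simple majority-threshold decoder, not the paper's condenser-based construction; the inclusion probability ln(2)/d, the noise injection, and the 0.8 decoding threshold are all illustrative assumptions chosen only to demonstrate the measurement/noise model and the "recovery up to false positives" behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 10                 # vector length, sparsity
m = int(4 * d * np.log(n))      # O(d log n) non-adaptive tests

# Random design (illustrative): include each item in each test with
# probability ~ ln(2)/d, so each test is positive with prob. about 1/2.
M = rng.random((m, n)) < np.log(2) / d

# Hidden d-sparse Boolean vector.
x = np.zeros(n, dtype=bool)
x[rng.choice(n, size=d, replace=False)] = True

# Noiseless disjunctive (OR) measurements.
y = (M & x).any(axis=1)

# Noise within the outcomes: up to delta*m false positives and O(m/d)
# false negatives. Injected at random here; the paper's model is
# adversarial, limited only by these budgets.
delta = 0.1
y_noisy = y.copy()
neg, pos = np.flatnonzero(~y), np.flatnonzero(y)
y_noisy[rng.choice(neg, size=min(len(neg), int(delta * m)), replace=False)] = True
y_noisy[rng.choice(pos, size=min(len(pos), m // (4 * d)), replace=False)] = False

# Threshold decoder: keep item j if most tests containing it came back
# positive. This tolerates both noise types but may report a few
# spurious items, i.e., reconstruction up to some false positives.
tests_per_item = M.sum(axis=0)
hits_per_item = (M & y_noisy[:, None]).sum(axis=0)
estimate = hits_per_item >= 0.8 * np.maximum(tests_per_item, 1)

print("true items recovered :", int((estimate & x).sum()), "of", d)
print("false-positive items :", int((estimate & ~x).sum()))
```

With these parameters the decoder typically recovers all $d$ true items while flagging at most a handful of extra ones, illustrating why tolerating $O(d)$ false positives lets the number of tests drop to $O(d \log n)$.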

Citations (95)
