
GROTESQUE: Noisy Group Testing (Quick and Efficient) (1307.2811v1)

Published 10 Jul 2013 in cs.IT and math.IT

Abstract: Group-testing refers to the problem of identifying (with high probability) a (small) subset of $D$ defectives from a (large) set of $N$ items via a "small" number of "pooled" tests. For ease of presentation, in this work we focus on the regime where $D = \mathcal{O}(N^{1-\delta})$ for some $\delta > 0$. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcomes of previous tests) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that $\Theta(D\log(N))$ tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity that is sub-linear in $N$ have started being investigated (recent work by \cite{GurI:04, IndN:10, NgoP:11} gave some of the first such algorithms). In the scenario with adaptive tests with noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity ($\mathcal{O}(D\log(N))$ in both performance metrics). The total number of stages of our adaptive algorithm is "small" ($\mathcal{O}(\log(D))$). Similarly, in the scenario with non-adaptive tests with noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires $\mathcal{O}(D\log(D)\log(N))$ tests and has a decoding complexity of $\mathcal{O}(D(\log N + \log^2 D))$). Finally, we present an adaptive algorithm that requires only 2 stages, and for which both the number of tests and the decoding complexity scale as $\mathcal{O}(D(\log N + \log^2 D))$. For all three settings the probability of error of our algorithms scales as $\mathcal{O}(1/\mathrm{poly}(D))$.
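To make the problem setup concrete, here is a minimal Python sketch of the noisy non-adaptive group-testing model the abstract describes: a random pooling design with on the order of $D\log N$ tests, OR-type test outcomes flipped with a small noise probability, and a naive per-item decoder. This is an illustrative toy under assumed parameter values (N, D, the noise level, and the constant in the number of tests are all hypothetical), not the paper's GROTESQUE algorithm or its sub-linear-time decoder.

```python
import numpy as np

# Toy simulation of noisy non-adaptive group testing.
# NOT the GROTESQUE scheme; a sketch of the problem setup only.

rng = np.random.default_rng(0)

N = 1000                      # total number of items (illustrative)
D = 5                         # number of defectives, D = O(N^{1-delta})
q = 0.05                      # probability a test outcome is flipped (noise)
T = int(8 * D * np.log(N))    # number of pooled tests, Theta(D log N); constant 8 is arbitrary

# Pick the defective set uniformly at random.
defectives = rng.choice(N, size=D, replace=False)
x = np.zeros(N, dtype=bool)
x[defectives] = True

# Random pooling: each item joins each test independently with probability ~1/D,
# a standard design for non-adaptive group testing.
A = rng.random((T, N)) < 1.0 / D

# Noiseless outcome of a test is the OR of its pooled items; each outcome is
# then flipped independently with probability q.
y_clean = (A.astype(int) @ x.astype(int)) > 0
flips = rng.random(T) < q
y = y_clean ^ flips

# Naive decoder (illustration only): score each item by the fraction of its
# tests that came back positive; defectives should score close to 1 - q,
# non-defectives noticeably lower. Real sub-linear-time decoders avoid this
# O(N T) scan entirely.
scores = np.zeros(N)
for i in range(N):
    tests_i = A[:, i]
    if tests_i.sum() > 0:
        scores[i] = y[tests_i].mean()

estimate = np.sort(np.argsort(scores)[-D:])
print("true defectives:", np.sort(defectives))
print("estimated      :", estimate)
```

With these (assumed) parameters the score gap between defective and non-defective items is typically large enough for the naive top-$D$ rule to recover the defective set, but its decoding time is linear in $N$; the point of the paper's schemes is to match the $\Theta(D\log N)$ test budget while decoding in time that is sub-linear in $N$.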

Citations (38)
