Near neighbor preserving dimension reduction for doubling subsets of $\ell_1$ (1902.08815v2)
Abstract: Randomized dimensionality reduction has been recognized as one of the fundamental techniques for handling high-dimensional data. Starting with the celebrated Johnson-Lindenstrauss Lemma, such reductions have been studied in depth for the Euclidean ($\ell_2$) metric, but much less for the Manhattan ($\ell_1$) metric. Our primary motivation is the approximate nearest neighbor problem in $\ell_1$. We exploit its reduction to the decision-with-witness version, called approximate \textit{near} neighbor, which incurs only a roughly logarithmic overhead. In 2007, Indyk and Naor, in the context of approximate nearest neighbors, introduced the notion of nearest neighbor-preserving embeddings. These are randomized embeddings between two metric spaces with bounded distortion guaranteed only for the distances between a query point and a point set. Such embeddings are known to exist for both the $\ell_2$ and $\ell_1$ metrics, as well as for doubling subsets of $\ell_2$. The case that remained open was that of doubling subsets of $\ell_1$. In this paper, we propose a dimension reduction by means of a \textit{near} neighbor-preserving embedding for doubling subsets of $\ell_1$. Our approach is to represent the point set with a carefully chosen covering set and then randomly project the latter. We study two types of covering sets, $c$-approximate $r$-nets and randomly shifted grids, and we discuss the tradeoff between them in terms of preprocessing time and target dimension. We employ Cauchy variables, and certain concentration bounds that we derive should be of independent interest.
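To make the two ingredients named above concrete, here is a minimal illustrative sketch in Python of (i) snapping a point set to a randomly shifted grid, one of the two covering sets considered, and (ii) projecting with a matrix of i.i.d. standard Cauchy entries, whose $1$-stability makes each projected coordinate of $x-y$ distributed as $\|x-y\|_1$ times a standard Cauchy variable. This is not the paper's actual embedding or analysis: the median-based distance estimator, the function names, and all parameter values (cell width, target dimension) are my own choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def shifted_grid_snap(points, side, rng):
    """Snap points to a grid of cell width `side` with a uniformly random
    shift. This plays the role of the covering set: nearby points collapse
    to the same (or an adjacent) grid representative."""
    shift = rng.uniform(0.0, side, size=points.shape[1])
    return np.floor((points + shift) / side) * side - shift

def cauchy_project(points, k, rng):
    """Project d-dimensional points to k dimensions using i.i.d. standard
    Cauchy entries (a 1-stable random projection)."""
    d = points.shape[1]
    A = rng.standard_cauchy(size=(d, k))
    return points @ A

def l1_estimate(px, py):
    """Estimate ||x - y||_1 from the projections via the median of absolute
    coordinate differences (the median of |standard Cauchy| equals 1)."""
    return np.median(np.abs(px - py))

# Toy usage: a high-dimensional point set, covered then projected.
X = rng.normal(size=(100, 512))
Xg = shifted_grid_snap(X, side=0.5, rng=rng)   # covering step
P = cauchy_project(Xg, k=64, rng=rng)          # 1-stable projection
print(l1_estimate(P[0], P[1]), np.abs(Xg[0] - Xg[1]).sum())
```

The median estimator is used here because Cauchy variables have no finite mean, so averaging projected coordinates does not concentrate; the paper's contribution lies precisely in the finer concentration bounds for such projections restricted to doubling subsets.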