Similarity preserving compressions of high dimensional sparse data (1612.06057v1)

Published 19 Dec 2016 in cs.DS and cs.CC

Abstract: The rise of the internet has resulted in an explosion of data consisting of millions of articles, images, songs, and videos. Most of this data is high dimensional and sparse. The need to perform efficient searches for similar objects in such high dimensional big datasets is becoming increasingly common. Even with the rapid growth in computing power, a brute-force search for this task is impractical and at times impossible. It is therefore natural to investigate techniques that compress the dimension of the dataset while preserving the similarity between data objects. In this work, we propose an efficient compression scheme that maps binary vectors to binary vectors while simultaneously preserving Hamming distance and Inner Product. The length of our compression depends only on the sparsity of the data and is independent of its dimension. Moreover, our scheme provides a one-shot solution for both Hamming distance and Inner Product, and works in the streaming setting as well. In contrast to the "local projection" strategies used by most previous schemes, our scheme combines (using sparsity) the following two strategies: $1.$ partitioning the dimensions into several buckets, and $2.$ obtaining "global linear summaries" within each of these buckets. We generalize our scheme to real-valued data and obtain compressions for Euclidean distance, Inner Product, and $k$-way Inner Product.
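
Below is a minimal sketch of the two-step strategy the abstract describes for the binary case, assuming parity over GF(2) as the "global linear summary" in each bucket. The function names, the hash-based bucket assignment, and the parameter choices are illustrative, not the paper's exact construction:

```python
import hashlib


def bucket_of(dim, num_buckets, seed=0):
    """Deterministically hash a dimension index into one of num_buckets buckets."""
    h = hashlib.sha256(f"{seed}:{dim}".encode()).digest()
    return int.from_bytes(h[:8], "big") % num_buckets


def compress_binary(ones, num_buckets, seed=0):
    """Compress a sparse binary vector, given as the set of indices of its
    1-bits, into a binary vector of length num_buckets.

    Step 1: each dimension is assigned to a bucket.
    Step 2: the compressed bit of a bucket is the parity (sum mod 2) of the
    input bits hashed into it -- a linear summary over GF(2). In a stream,
    each arriving index simply flips one output bit.
    """
    out = [0] * num_buckets
    for i in ones:
        out[bucket_of(i, num_buckets, seed)] ^= 1
    return out


def hamming(u, v):
    """Hamming distance between two equal-length binary vectors."""
    return sum(a != b for a, b in zip(u, v))


# Two sparse vectors in a huge ambient dimension, as sets of 1-positions.
x = {3, 1_000, 250_000}
y = {3, 1_000, 999_999}

cx = compress_binary(x, num_buckets=64)
cy = compress_binary(y, num_buckets=64)
print(hamming(cx, cy))  # 2 with high probability (the true Hamming distance)
```

The intuition for why this can work: when the number of buckets is chosen large relative to the sparsity (on the order of the squared sparsity, by a birthday-paradox argument), distinct 1-positions rarely collide in a bucket, so distances survive compression with high probability; the output length depends only on the sparsity, never on the ambient dimension, matching the abstract's claim.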

Citations (4)
