A Survey on Efficient Processing of Similarity Queries over Neural Embeddings (2204.07922v1)

Published 17 Apr 2022 in cs.DB and cs.IR

Abstract: Similarity queries are the family of queries based on some similarity metric. Unlike traditional database queries, which are mostly based on value equality, similarity queries aim to find targets "similar enough to" the given data objects under some similarity metric, e.g., Euclidean distance or cosine similarity. To measure the similarity between data objects, traditional methods normally work on low-level or syntactic features (e.g., basic visual features of images or bag-of-words features of text), which makes them poor at capturing semantic similarity between objects. To measure data similarity semantically, neural embeddings are applied. Embedding techniques represent raw data objects as vectors (called "embeddings" or "neural embeddings", since they are mostly generated by neural network models) that expose the hidden semantics of the raw data. Embeddings show outstanding effectiveness at capturing data similarity, making them one of the most widely used and studied techniques in state-of-the-art similarity query processing research. However, many open challenges remain around the efficiency of embedding-based similarity query processing, which is not as well studied as its effectiveness. In this survey, we first provide an overview of the "similarity query" and "similarity query processing" problems. We then discuss recent approaches to designing indexes and operators for highly efficient similarity query processing on top of embeddings (or, more generally, high-dimensional data). Finally, we investigate specific solutions, with and without embeddings, in selected application domains of similarity queries, including entity resolution and information retrieval. By comparing these solutions, we show how neural embeddings benefit those applications.
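
As a rough illustration of the kind of query the survey targets (not taken from the paper itself), the sketch below runs a brute-force top-k similarity query over a set of embedding vectors using cosine similarity. The function name, dimensions, and data are hypothetical; the indexes and operators surveyed in the paper exist precisely to avoid this kind of linear scan at scale.

```python
# Minimal sketch (not from the paper): a brute-force top-k similarity query
# over neural embeddings using cosine similarity. Real systems replace the
# linear scan below with specialized high-dimensional indexes.
import numpy as np

def top_k_similar(query_vec: np.ndarray, embeddings: np.ndarray, k: int = 5):
    """Return indices of the k embeddings most similar to query_vec."""
    # Normalize so that a dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    db = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity to every object
    return np.argsort(-sims)[:k]       # indices of the k highest scores

# Hypothetical usage: 10,000 objects embedded into 128-dimensional vectors.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 128))
query = rng.normal(size=128)
print(top_k_similar(query, corpus, k=3))
```

Normalizing both the query and the corpus turns cosine similarity into a plain dot product, the same reduction many vector indexes use to treat cosine search as inner-product search.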

Authors (1)
  1. Yifan Wang
Citations (3)
