
SPANN: Highly-efficient Billion-scale Approximate Nearest Neighbor Search (2111.08566v1)

Published 5 Nov 2021 in cs.DB, cs.AI, cs.CV, cs.IR, and cs.LG

Abstract: The in-memory algorithms for approximate nearest neighbor search (ANNS) have achieved great success for fast high-recall search, but are extremely expensive when handling very large-scale databases. Thus, there is an increasing demand for hybrid ANNS solutions with small memory and inexpensive solid-state drives (SSD). In this paper, we present a simple but efficient memory-disk hybrid indexing and search system, named SPANN, that follows the inverted index methodology. It stores the centroid points of the posting lists in memory and the large posting lists on disk. We guarantee both disk-access efficiency (low latency) and high recall by effectively reducing the number of disk accesses and retrieving high-quality posting lists. In the index-building stage, we adopt a hierarchical balanced clustering algorithm to balance the lengths of the posting lists and augment each posting list by adding the points in the closure of the corresponding cluster. In the search stage, we use a query-aware scheme to dynamically prune access to unnecessary posting lists. Experiment results demonstrate that SPANN is 2$\times$ faster than the state-of-the-art ANNS solution DiskANN in reaching the same recall quality of $90\%$ with the same memory cost on three billion-scale datasets. It can reach $90\%$ recall@1 and recall@10 in just around one millisecond with only a 32GB memory cost. Code is available at: https://github.com/microsoft/SPTAG

Citations (33)

Summary

  • The paper proposes a novel hybrid indexing method that combines in-memory centroids and disk-stored posting lists for efficient billion-scale search.
  • It employs hierarchical balanced clustering and posting list expansion to ensure balanced list lengths and enhance recall.
  • Query-aware dynamic pruning optimizes search speed, achieving 90% recall in approximately one millisecond on large datasets.

Overview of SPANN: Highly-efficient Billion-scale Approximate Nearest Neighbor Search

The paper "SPANN: Highly-efficient Billion-scale Approximate Nearest Neighbor Search" introduces an innovative approach to Approximate Nearest Neighbor Search (ANNS) designed to efficiently handle large-scale databases while minimizing memory and computational overhead. The system, SPANN, relies on a hybrid memory-disk indexing approach, utilizing both RAM and SSD storage to achieve high recall rates with reduced latency.

Core Contributions

SPANN proposes a memory-efficient inverted index methodology. The system stores the centroid points of the posting lists in memory and places the posting lists themselves on disk. This design keeps latency low and recall high by limiting the number of disk accesses per query and ensuring that the posting lists retrieved are high quality.
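To make the memory-disk split concrete, the following is a minimal sketch of this style of inverted-index search, not the authors' implementation. Centroids are held in a NumPy array standing in for RAM, posting lists in a dictionary standing in for SSD storage, and all names (`search`, `n_probe`, `posting_lists`) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 16)).astype(np.float32)

# Pretend these centroids came from SPANN's hierarchical balanced
# clustering; here we simply sample points as stand-in centroids.
n_lists = 32
centroids = data[rng.choice(len(data), n_lists, replace=False)]  # in memory

# Assign every point to its nearest centroid -> posting lists (on "disk").
assign = np.argmin(
    ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
posting_lists = {c: np.where(assign == c)[0] for c in range(n_lists)}

def search(query, k=10, n_probe=4):
    """Probe the n_probe nearest posting lists, then rank their members."""
    d_cent = ((centroids - query) ** 2).sum(-1)   # cheap in-memory step
    probe = np.argsort(d_cent)[:n_probe]
    cand = np.concatenate([posting_lists[c] for c in probe])
    d = ((data[cand] - query) ** 2).sum(-1)       # simulated disk reads
    return cand[np.argsort(d)[:k]]

ids = search(data[0])  # the query point itself should rank first
```

Only the centroid comparison touches memory; everything fetched from `posting_lists` models a disk read, which is why keeping the number of probed lists small is the central latency lever.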

Key Techniques

  1. Hierarchical Balanced Clustering: This method aims to ensure that posting lists have balanced lengths. By partitioning the dataset into a hierarchical structure of clusters, SPANN ensures reduced variance in posting list lengths, which is crucial for handling large-scale datasets given RAM constraints.
  2. Posting List Expansion: SPANN expands the content of posting lists by adding boundary points from clusters, enhancing the recall probability for vectors on the boundary of the list. This is crucial in overcoming the challenge of missing relevant vectors due to partial search.
  3. Query-aware Dynamic Pruning: This technique allows for the dynamic adjustment of the number of posting lists to be searched based on the query's specific needs. It reduces unnecessary posting list accesses, thereby optimizing both recall and latency.
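The query-aware pruning idea in point 3 can be sketched as a relative-distance threshold: rather than probing a fixed number of posting lists, keep only those whose centroid lies within a small factor of the closest centroid's distance. This is a hedged illustration of the principle; the parameter name `eps` and the cap `max_probe` are assumptions, not the paper's exact notation:

```python
import numpy as np

rng = np.random.default_rng(1)
centroids = rng.standard_normal((64, 16)).astype(np.float32)  # in-memory

def select_lists(query, eps=0.1, max_probe=8):
    """Keep posting lists whose centroid distance is within (1 + eps)
    of the nearest centroid's distance, capped at max_probe lists."""
    d = np.sqrt(((centroids - query) ** 2).sum(-1))
    order = np.argsort(d)
    keep = order[d[order] <= (1.0 + eps) * d[order[0]]]
    return keep[:max_probe]

# An "easy" query sitting next to one centroid prunes almost everything.
query = centroids[3] + 0.01
lists = select_lists(query)
```

The effect is that easy queries (one clearly dominant cluster) trigger very few disk reads, while hard queries near cluster boundaries automatically probe more lists, trading latency for recall only where needed.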

Performance Evaluation

The efficacy of SPANN was evaluated against state-of-the-art billion-scale ANNS solutions such as DiskANN on the SIFT1B, SPACEV1B, and DEEP1B datasets. SPANN reached 90% recall@1 and recall@10 in roughly one millisecond with only 32GB of memory, about 2× faster than DiskANN at the same recall and memory budget.

The system also outperformed competitors in VQ (vector-query) capacity, i.e., the number of vectors a single machine can serve while meeting given latency and recall targets, indicating more efficient use of resources relative to memory consumption. Ablation studies showed the effectiveness of hierarchical balanced clustering and query-aware pruning, highlighting their critical roles in the system's performance.

Implications and Future Directions

SPANN offers significant advancements in efficiently managing large-scale vector searches with minimal memory costs. The reduced computational burden makes it suitable for applications in web search and multimedia data retrieval where large datasets and rapid response times are essential.

Potential future work includes exploring GPU optimizations, as preliminary experiments indicate substantial speed enhancements during the index build phase. The scalability of SPANN in distributed environments further emphasizes its practical applicability for industry-scale deployments, suggesting a robust framework for extending ANN solutions into even larger datasets.

In conclusion, SPANN presents a compelling approach to tackling the challenges posed by billion-scale vector searches, making it a notable contribution to the field of ANNS. The integration of novel indexing strategies and effective resource management positions SPANN as a competitive solution in the domain of large-scale data retrieval.
