Per-bucket concurrent rehashing algorithms (1509.02235v1)

Published 8 Sep 2015 in cs.DS

Abstract: This paper describes a generic algorithm for concurrent resizing and on-demand per-bucket rehashing for an extensible hash table. In contrast to known lock-based hash table algorithms, the proposed algorithm separates the resizing and rehashing stages so that they neither invalidate existing buckets nor block any concurrent operations. Instead, the rehashing work is deferred and split across subsequent operations on the table. The rehashing operation uses bucket-level synchronization only and therefore allows a race condition between lookup and moving operations running in different threads. Instead of using explicit synchronization, the algorithm detects the race condition and restarts the lookup operation. In comparison with other lock-based algorithms, the proposed algorithm reduces high-level synchronization on the hot path, improving performance, concurrency, and scalability of the table. The response time of the operations is also more predictable. The algorithm is compatible with cache-friendly data layouts for buckets and does not depend on any memory reclamation techniques, thus potentially achieving additional performance gains in corresponding implementations.
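The abstract itself contains no pseudocode; the following is a minimal C++17 sketch of how the three ideas it describes could fit together: a resize that only publishes new, empty buckets marked "not yet rehashed"; deferred per-bucket rehashing performed on demand under bucket-level locks; and a lookup that detects a race with a concurrent resize/rehash by re-checking the table capacity and restarting. All names (Table, Bucket, grow, rehash_bucket, find) and the specific synchronization choices are illustrative assumptions, not the paper's implementation.

```cpp
// Minimal C++17 sketch (illustrative names and simplifications, not the paper's code).
// Ideas shown: (1) grow() only publishes new, empty buckets marked "not rehashed";
// (2) rehash_bucket() splits a parent bucket on demand under bucket-level locks;
// (3) find() detects a race with a concurrent resize/rehash and restarts.
#include <atomic>
#include <cstddef>
#include <memory>
#include <mutex>
#include <optional>

struct Node { std::size_t hash; long key; long value; Node* next; };

struct Bucket {
    std::mutex lock;                    // bucket-level synchronization only
    std::atomic<bool> rehashed{false};  // false: my entries may still sit in the parent bucket
    Node* head = nullptr;
};

class Table {
    static constexpr int kMaxSegments = 32;            // no overflow checks in this sketch
    std::unique_ptr<Bucket[]> segments_[kMaxSegments]; // segment s holds buckets [2^(s-1), 2^s)
    std::atomic<std::size_t> capacity_{1};             // bucket count, always a power of two
    std::mutex grow_lock_;                             // serializes grow() only, not lookups

    Bucket& bucket(std::size_t i) {
        if (i == 0) return segments_[0][0];
        int s = 0;
        while ((std::size_t{1} << s) <= i) ++s;        // s-1 is the position of i's top bit
        return segments_[s][i - (std::size_t{1} << (s - 1))];
    }

public:
    Table() { segments_[0] = std::make_unique<Bucket[]>(1); segments_[0][0].rehashed = true; }

    // Resizing: publish one new segment of empty, not-yet-rehashed buckets.
    // Existing buckets are neither moved nor invalidated, so no operation is blocked.
    void grow() {
        std::lock_guard<std::mutex> g(grow_lock_);
        std::size_t cap = capacity_.load(std::memory_order_relaxed);
        int s = 0;
        while ((std::size_t{1} << s) < cap * 2) ++s;   // index of the segment being added
        segments_[s] = std::make_unique<Bucket[]>(cap);
        capacity_.store(cap * 2, std::memory_order_release);
    }

    // Deferred, on-demand rehash of one bucket: pull the entries that now belong
    // to bucket i out of its parent (i with its top bit cleared), under two bucket locks.
    void rehash_bucket(std::size_t i) {
        std::size_t top = 1;
        while (top * 2 <= i) top *= 2;                 // highest set bit of i
        std::size_t parent = i & (top - 1);
        if (!bucket(parent).rehashed.load(std::memory_order_acquire))
            rehash_bucket(parent);                     // the parent itself may still be pending
        Bucket& p = bucket(parent);
        Bucket& b = bucket(i);
        std::scoped_lock g(p.lock, b.lock);
        if (b.rehashed.load(std::memory_order_relaxed)) return;   // another thread finished it
        for (Node** link = &p.head; Node* n = *link; ) {
            if ((n->hash & (top * 2 - 1)) == i) {      // entry maps to the new bucket
                *link = n->next; n->next = b.head; b.head = n;
            } else {
                link = &n->next;
            }
        }
        b.rehashed.store(true, std::memory_order_release);
    }

    // Lookup hot path: no table-wide synchronization. On a miss, re-read the
    // capacity; if the key now maps to a different bucket, a concurrent
    // resize/rehash raced with us, so restart instead of reporting a false miss.
    std::optional<long> find(long key, std::size_t hash) {
        for (;;) {
            std::size_t cap = capacity_.load(std::memory_order_acquire);
            std::size_t i = hash & (cap - 1);
            Bucket& b = bucket(i);
            if (!b.rehashed.load(std::memory_order_acquire)) rehash_bucket(i);
            {
                std::lock_guard<std::mutex> g(b.lock);
                for (Node* n = b.head; n; n = n->next)
                    if (n->hash == hash && n->key == key) return n->value;
            }
            std::size_t now = capacity_.load(std::memory_order_acquire);
            if ((hash & (now - 1)) == i) return std::nullopt;     // genuine miss
        }
    }
};
```

An insert would follow the same pattern: lock only the destination bucket and re-check the capacity under that lock so a concurrent grow cannot strand the new node in a stale bucket. Erase, the growth policy (when grow is triggered), and node reclamation are deliberately left out here; as the abstract notes, the algorithm does not depend on any particular memory reclamation technique.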

Citations (2)
