
Dynamic Indexability: The Query-Update Tradeoff for One-Dimensional Range Queries (0811.4346v1)

Published 26 Nov 2008 in cs.DS and cs.DB

Abstract: The B-tree is a fundamental secondary index structure that is widely used for answering one-dimensional range reporting queries. Given a set of $N$ keys, a range query can be answered in $O(\log_B \frac{N}{M} + \frac{K}{B})$ I/Os, where $B$ is the disk block size, $K$ the output size, and $M$ the size of the main memory buffer. When keys are inserted or deleted, the B-tree is updated in $O(\log_B N)$ I/Os, if we require the resulting changes to be committed to disk right away. Otherwise, the memory buffer can be used to buffer the recent updates, and changes can be written to disk in batches, which significantly lowers the amortized update cost. A systematic way of batching up updates is to use the logarithmic method, combined with fractional cascading, resulting in a dynamic B-tree that supports insertions in $O(\frac{1}{B}\log\frac{N}{M})$ I/Os and queries in $O(\log\frac{N}{M} + \frac{K}{B})$ I/Os. Such bounds have also been matched by several known dynamic B-tree variants in the database literature. In this paper, we prove that for any dynamic one-dimensional range query index structure with query cost $O(q+\frac{K}{B})$ and amortized insertion cost $O(u/B)$, the tradeoff $q\cdot \log(u/q) = \Omega(\log B)$ must hold if $q=O(\log B)$. For most reasonable values of the parameters, we have $\frac{N}{M} = B^{O(1)}$, in which case our query-insertion tradeoff implies that the bounds mentioned above are already optimal. Our lower bounds hold in a dynamic version of the {\em indexability model}, which is of independent interest.
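The logarithmic method referred to in the abstract can be illustrated with a toy in-memory sketch: inserts accumulate in a small buffer (playing the role of the memory buffer of size $M$), and when the buffer fills it is merged into geometrically growing sorted runs, so each key participates in $O(\log\frac{N}{M})$ merges while a range query must probe every run. This is a simplified illustration under assumed names (`LogMethodIndex`, `buffer_size`), not the paper's construction, which additionally uses fractional cascading and counts I/Os rather than comparisons.

```python
from bisect import bisect_left, bisect_right

class LogMethodIndex:
    """Toy logarithmic-method index (illustrative only): inserts are
    buffered in memory and flushed into geometrically growing sorted runs."""

    def __init__(self, buffer_size=4):
        self.buffer_size = buffer_size  # plays the role of M
        self.buffer = []                # unsorted in-memory buffer
        self.runs = []                  # sorted "on-disk" runs, largest first

    def insert(self, key):
        self.buffer.append(key)
        if len(self.buffer) >= self.buffer_size:
            self._flush()

    def _flush(self):
        # Merge the flushed buffer with every run of comparable size, so
        # runs grow geometrically and each key takes part in at most
        # O(log(N/M)) merges overall (amortized).
        run = sorted(self.buffer)
        self.buffer = []
        while self.runs and len(self.runs[-1]) <= len(run):
            run = sorted(run + self.runs.pop())
        self.runs.append(run)

    def range_query(self, lo, hi):
        # A query probes the buffer and every run; the number of runs is
        # the source of the additive O(log(N/M)) term in the query cost.
        out = [k for k in self.buffer if lo <= k <= hi]
        for run in self.runs:
            out.extend(run[bisect_left(run, lo):bisect_right(run, hi)])
        return sorted(out)
```

The paper's lower bound shows that, once $\frac{N}{M} = B^{O(1)}$, no dynamic scheme of this batched flavor can asymptotically beat the query-insertion tradeoff achieved above.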

Authors (1)
  1. Ke Yi (37 papers)
