The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures (2008.00297v2)

Published 1 Aug 2020 in cs.CR, cs.DB, cs.DS, and cs.LG

Abstract: The concept of learned index structures relies on the idea that the input-output functionality of a database index can be viewed as a prediction task and, thus, be implemented using a machine learning model instead of traditional algorithmic techniques. This novel angle on a decades-old problem has inspired numerous exciting results at the intersection of machine learning and data structures. However, the main advantage of learned index structures, i.e., the ability to adjust to the data at hand via the underlying ML model, can become a disadvantage from a security perspective, as it can be exploited. In this work, we present the first study of poisoning attacks on learned index structures. The required poisoning approach differs from all previous works since the model under attack is trained on a cumulative distribution function (CDF) and, thus, every injection into the training set has a cascading impact on multiple data values. We formulate the first poisoning attacks on linear regression models trained on the CDF, which is a basic building block of the proposed learned index structures. We generalize our poisoning techniques to attack a more advanced two-stage design of learned index structures called the recursive model index (RMI), which has been shown to outperform traditional B-Trees. We evaluate our attacks on real-world and synthetic datasets under a wide variety of parameterizations of the model and show that the error of the RMI increases up to $300\times$ and the error of its second-stage models increases up to $3000\times$.
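
To make the setting concrete, the sketch below (not from the paper) shows a minimal learned index: a linear regression fit to the rank of each key, i.e., a scaled empirical CDF, and how a cluster of injected keys shifts the ranks of all larger keys and degrades the fit. The clustered-injection strategy, dataset, and all names here are illustrative assumptions; the paper derives far more effective poisoning constructions and extends them to the two-stage RMI.

```python
import numpy as np

def fit_linear_index(keys):
    """Fit a linear model mapping key -> position in the sorted array,
    i.e., a linear approximation of the empirical CDF scaled by n."""
    keys = np.sort(keys)
    positions = np.arange(len(keys))
    slope, intercept = np.polyfit(keys, positions, 1)
    return keys, (slope, intercept)

def max_error(keys, model):
    """Worst-case gap between predicted and true positions (search range)."""
    slope, intercept = model
    predicted = slope * keys + intercept
    return np.max(np.abs(predicted - np.arange(len(keys))))

rng = np.random.default_rng(0)
legit = rng.uniform(0, 1_000, size=10_000)          # legitimate keys
keys, model = fit_linear_index(legit)
print("max error before poisoning:", max_error(keys, model))

# Illustrative attack: inject a dense cluster of keys near one end of the
# key space. Because the training labels are ranks, every injected key
# shifts the position of all larger legitimate keys -- the cascading
# effect on the CDF noted in the abstract -- so the linear fit degrades.
poison = rng.uniform(990, 1_000, size=1_000)
poisoned_keys, poisoned_model = fit_linear_index(np.concatenate([legit, poison]))
print("max error after poisoning: ", max_error(poisoned_keys, poisoned_model))
```

In a learned index, this worst-case position error determines how wide a range must be searched around each prediction, so inflating it directly degrades lookup performance.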

Citations (15)
