
Membership Inference Attacks on Knowledge Graphs (2104.08273v2)

Published 16 Apr 2021 in cs.AI and cs.CL

Abstract: Membership inference attacks (MIAs) infer whether a specific data record was used to train a target model. MIAs have provoked many discussions in the information security community because they raise severe data privacy issues, especially for private and sensitive datasets. Knowledge Graphs (KGs), which describe domain-specific subjects and the relationships among them, are valuable and often sensitive; for example, medical KGs are constructed from electronic health records. However, the privacy threat to knowledge graphs is critical yet rarely explored. In this paper, we conduct the first empirical evaluation of privacy threats to knowledge graphs posed by knowledge graph embedding methods (KGEs). We propose three types of membership inference attacks, ordered by attack difficulty: transfer attacks (TAs), prediction loss-based attacks (PLAs), and prediction correctness-based attacks (PCAs). In the experiments, we run the three inference attacks against four standard KGE methods over three benchmark datasets. In addition, we also apply the attacks to a medical KG and a financial KG. The results demonstrate that the proposed attack methods can readily expose the privacy leakage of knowledge graphs.
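To make the attack taxonomy concrete, the following is a minimal sketch of a prediction loss-based attack (PLA) against a TransE-style knowledge graph embedding. Everything here is an illustrative assumption, not the paper's implementation: the `TransEScorer` class, the toy random embeddings, and the fixed threshold `tau` are hypothetical stand-ins for a trained target model and an attacker-calibrated threshold.

```python
# Hypothetical sketch of a prediction loss-based membership inference attack (PLA)
# on a TransE-style knowledge graph embedding model. The model, data, and
# threshold below are toy assumptions for illustration only.

import numpy as np


class TransEScorer:
    """Toy TransE-style scorer: lower ||h + r - t|| means the triple fits the model better."""

    def __init__(self, num_entities, num_relations, dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.ent = rng.normal(size=(num_entities, dim))
        self.rel = rng.normal(size=(num_relations, dim))

    def loss(self, h, r, t):
        # The L2 distance serves as the per-triple "prediction loss".
        return float(np.linalg.norm(self.ent[h] + self.rel[r] - self.ent[t]))


def loss_based_mia(model, triple, tau):
    """Declare 'member' if the target model's loss on the triple falls below threshold tau."""
    return model.loss(*triple) < tau


if __name__ == "__main__":
    model = TransEScorer(num_entities=100, num_relations=10)
    tau = 7.5  # in practice the attacker would calibrate tau, e.g. on shadow-model losses
    candidate = (3, 1, 42)  # (head, relation, tail) triple whose training membership is queried
    print("member" if loss_based_mia(model, candidate, tau) else "non-member")
```

The intuition is the standard one for loss-based MIAs: triples seen during training tend to receive lower loss than unseen triples, so thresholding the loss separates members from non-members. Transfer attacks and prediction correctness-based attacks relax the attacker's access further, relying on a shadow model or only on whether predictions are correct, respectively.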

Citations (13)

