Membership Inference Attacks on Knowledge Graphs

Published 16 Apr 2021 in cs.AI and cs.CL | (2104.08273v2)

Abstract: Membership inference attacks (MIAs) infer whether a specific data record was used to train a target model. MIAs have provoked much discussion in the information-security community because they raise severe data-privacy issues, especially for private and sensitive datasets. Knowledge graphs (KGs), which describe domain-specific entities and the relationships among them, are valuable and sensitive, as exemplified by medical KGs constructed from electronic health records. However, the privacy threat to knowledge graphs, while critical, has rarely been explored. In this paper, we conduct the first empirical evaluation of privacy threats to knowledge graphs posed by knowledge graph embedding (KGE) methods. We propose three types of membership inference attacks, ordered by attack difficulty: transfer attacks (TAs), prediction loss-based attacks (PLAs), and prediction correctness-based attacks (PCAs). In the experiments, we run the three attacks against four standard KGE methods on three benchmark datasets. In addition, we also evaluate the attacks on a medical KG and a financial KG. The results demonstrate that the proposed attacks can readily expose privacy leakage in knowledge graphs.
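Of the three attacks, the prediction loss-based attack (PLA) admits a compact illustration: triples that were in the training set tend to receive a lower loss (better score) under the target KGE model than unseen triples, so the attacker can simply threshold the score. Below is a minimal sketch, using the well-known TransE scoring function as a stand-in for the target model's loss; the toy embeddings, the threshold value, and the `pla_predict` helper are hypothetical illustrations, not details from the paper:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score ||h + r - t||_2 (lower = more plausible).
    TransE is a standard KGE method; its score serves as a loss proxy here."""
    return float(np.linalg.norm(h + r - t))

def pla_predict(score, threshold):
    """Sketch of the PLA decision rule: classify a triple as a training
    member when the target model's score/loss falls below an
    attacker-chosen threshold."""
    return score < threshold

# Hypothetical 4-d embeddings: a "member" triple the model fit well
# (h + r lands close to t) and a "non-member" triple it did not.
h = np.array([0.1, 0.2, 0.3, 0.4])
r = np.array([0.5, 0.1, -0.2, 0.0])
t_member = h + r + 0.01      # tiny residual -> low score
t_nonmember = h + r + 1.0    # large residual -> high score

threshold = 0.5
is_member = pla_predict(transe_score(h, r, t_member), threshold)
is_nonmember = pla_predict(transe_score(h, r, t_nonmember), threshold)
```

In practice the attacker would calibrate the threshold, e.g. on shadow models trained on similar data; the thresholding step itself stays the same.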

Citations (13)
