Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk? (2306.00578v1)

Published 1 Jun 2023 in cs.LG, cs.AI, and cs.CR

Abstract: Graph neural networks (GNNs) have shown promising results on real-life datasets and applications, including healthcare, finance, and education. However, recent studies have shown that GNNs are highly vulnerable to attacks such as membership inference and link reconstruction attacks. Surprisingly, attribute inference attacks have received little attention. In this paper, we initiate the first investigation into attribute inference attacks, where an attacker aims to infer a user's sensitive attributes from their public or non-sensitive attributes. We ask whether black-box attribute inference attacks constitute a significant privacy risk for graph-structured data and the corresponding GNN models. We take a systematic approach to launching the attacks by varying the adversarial knowledge and assumptions. Our findings reveal that when an attacker has black-box access to the target model, GNNs generally do not reveal significantly more information than missing-value estimation techniques. Code is available.
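The abstract's central comparison — a black-box attribute inference attack against a plain missing-value estimate — can be sketched on toy data. Everything below is an illustrative assumption, not the paper's actual setup: the data model, the `black_box_model` stand-in for the target GNN, and the thresholding attack are all hypothetical.

```python
import random

random.seed(0)

# Toy setup: each user has one public attribute x and one sensitive
# binary attribute s, with x correlated with s. The "black-box model"
# returns a score that leaks information about s only through x.

def make_user():
    s = random.randint(0, 1)                 # sensitive attribute
    x = random.gauss(2.0 * s, 1.0)           # correlated public attribute
    return x, s

def black_box_model(x):
    # Stand-in for querying the target GNN: a sigmoid score over x.
    return 1.0 / (1.0 + 2.718281828 ** (-x))

# Attacker: query the model on a shadow set, then threshold the
# returned scores at their mean to infer s for new users.
shadow = [make_user() for _ in range(2000)]
score_threshold = sum(black_box_model(x) for x, _ in shadow) / len(shadow)

def attack(x):
    return 1 if black_box_model(x) > score_threshold else 0

# Baseline: missing-value estimation that ignores the model entirely
# and thresholds the public attribute at its mean.
mean_x = sum(x for x, _ in shadow) / len(shadow)

def baseline(x):
    return 1 if x > mean_x else 0

test = [make_user() for _ in range(2000)]
attack_acc = sum(attack(x) == s for x, s in test) / len(test)
base_acc = sum(baseline(x) == s for x, s in test) / len(test)
print(f"attack accuracy:   {attack_acc:.2f}")
print(f"baseline accuracy: {base_acc:.2f}")
```

On this toy data both approaches land close together, mirroring the paper's finding that black-box model access adds little over simple imputation of the missing sensitive attribute.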

Authors (4)
  1. Iyiola E. Olatunji (9 papers)
  2. Anmar Hizber (1 paper)
  3. Oliver Sihlovec (1 paper)
  4. Megha Khosla (35 papers)
Citations (5)
