Model Stealing Attacks Against Inductive Graph Neural Networks (2112.08331v1)

Published 15 Dec 2021 in cs.CR and cs.LG

Abstract: Much real-world data comes in the form of graphs. Graph neural networks (GNNs), a family of ML models designed to fully leverage graph data, have been proposed to build powerful applications. In particular, inductive GNNs, which can generalize to unseen data, have become mainstream in this direction. Machine learning models have shown great potential in various tasks and have been deployed in many real-world scenarios. Training a good model requires large amounts of data and computational resources, making such models valuable intellectual property. Previous research has shown that ML models are prone to model stealing attacks, which aim to steal the functionality of the target models, but most of this work focuses on models trained on images and text. Little attention has been paid to models trained on graph data, i.e., GNNs. In this paper, we fill the gap by proposing the first model stealing attacks against inductive GNNs. We systematically define the threat model and propose six attacks based on the adversary's background knowledge and the responses of the target models. Our evaluation on six benchmark datasets shows that the proposed model stealing attacks against GNNs achieve promising performance.
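At a high level, the attacks the abstract describes amount to querying the deployed (target) inductive GNN on nodes the adversary controls and fitting a surrogate model to the returned responses. Below is a minimal sketch of that idea, assuming the target returns class posteriors and the surrogate is a two-layer GraphSAGE model built with PyTorch Geometric; the names `SurrogateGNN`, `steal`, `target_model`, and `query_graph` are illustrative assumptions, not the paper's exact six attacks.

```python
# Hypothetical model-stealing sketch against an inductive GNN.
# Assumption: the adversary can query the target with node features and edges
# and receives soft predictions (class posteriors) in return.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv


class SurrogateGNN(torch.nn.Module):
    """Two-layer GraphSAGE surrogate (an illustrative architecture choice)."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def steal(target_model, query_graph, epochs=200, lr=0.01):
    """Train a surrogate to mimic the target's responses on a query graph."""
    x, edge_index = query_graph.x, query_graph.edge_index

    with torch.no_grad():
        # Query the target model; here we assume it exposes class posteriors.
        target_posteriors = F.softmax(target_model(x, edge_index), dim=-1)

    surrogate = SurrogateGNN(x.size(1), 64, target_posteriors.size(1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)

    for _ in range(epochs):
        opt.zero_grad()
        log_probs = F.log_softmax(surrogate(x, edge_index), dim=-1)
        # Match the target's soft predictions via KL divergence (distillation-style).
        loss = F.kl_div(log_probs, target_posteriors, reduction='batchmean')
        loss.backward()
        opt.step()

    return surrogate
```

If the target instead returns node embeddings rather than posteriors, the same loop would swap the KL term for an embedding-matching objective; the paper's attack taxonomy varies exactly along these axes of adversary knowledge and response type.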

Authors (4)
  1. Yun Shen (61 papers)
  2. Xinlei He (58 papers)
  3. Yufei Han (26 papers)
  4. Yang Zhang (1129 papers)
Citations (53)