DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs

Published 17 Jun 2021 in cs.SI (arXiv:2106.09501v2)

Abstract: With the rapid development of artificial intelligence, a number of machine learning algorithms, such as graph neural networks, have been proposed to facilitate network analysis and graph data mining. Although effective, recent studies show that these advanced methods may suffer from adversarial attacks, i.e., they may lose effectiveness when only a small fraction of links is unexpectedly changed. This paper investigates three well-known adversarial attack methods, i.e., Nettack, Meta Attack, and GradArgmax. It is found that each attack method has its own specific preferences for how it changes the target network structure. Such attack patterns are further verified by experimental results on several real-world networks, revealing that, in general, the top four most important network attributes for detecting adversarial samples suffice to explain an attack method's preferences. Based on these findings, the network attributes are used to build machine learning models for adversarial sample detection and attack method recognition, achieving outstanding performance.
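The pipeline described in the abstract, using structural network attributes as features for an off-the-shelf classifier, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attribute set, the random-rewiring "attack" stand-in, and the random-forest classifier are all assumptions made for the example.

```python
# Sketch of attribute-based adversarial sample detection: compute a few
# structural attributes per graph and train a standard classifier to
# separate clean graphs from perturbed ones. The attributes and the toy
# perturbation below are illustrative assumptions, not the paper's setup.
import random
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def graph_attributes(G):
    """A small, illustrative set of structural attributes of graph G."""
    degrees = [d for _, d in G.degree()]
    return [
        float(np.mean(degrees)),   # mean degree
        float(np.var(degrees)),    # degree variance
        nx.density(G),             # edge density
        nx.transitivity(G),        # global clustering coefficient
    ]

def perturb(G, n_flips=5, seed=0):
    """Toy stand-in for an attack: randomly flip a few links."""
    rng = random.Random(seed)
    H = G.copy()
    nodes = list(H.nodes())
    for _ in range(n_flips):
        u, v = rng.sample(nodes, 2)
        if H.has_edge(u, v):
            H.remove_edge(u, v)
        else:
            H.add_edge(u, v)
    return H

# Build a small labeled dataset: clean graphs (label 0) vs. perturbed (label 1).
X, y = [], []
for seed in range(40):
    G = nx.barabasi_albert_graph(60, 3, seed=seed)
    X.append(graph_attributes(G)); y.append(0)
    X.append(graph_attributes(perturb(G, seed=seed))); y.append(1)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

The same feature vectors could feed a multi-class classifier to recognize which attack produced a sample; the paper's finding is that a handful of the most important attributes already carries most of the discriminative signal.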
