
Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets (2312.06568v1)

Published 11 Dec 2023 in cs.LG, cs.AI, and cs.CR

Abstract: Graph Lottery Tickets (GLTs), comprising a sparse adjacency matrix and a sparse graph neural network (GNN), can significantly reduce the inference latency and compute footprint compared to their dense counterparts. Despite these benefits, their performance against adversarial structure perturbations has yet to be fully explored. In this work, we first investigate the resilience of GLTs against different structure perturbation attacks and observe that they are highly vulnerable, showing a large drop in classification accuracy. Based on this observation, we then present an adversarially robust graph sparsification (ARGS) framework that prunes the adjacency matrix and the GNN weights by optimizing a novel loss function capturing the graph homophily property and information associated with both the true labels of the train nodes and the pseudo labels of the test nodes. By iteratively applying ARGS to prune both the perturbed graph adjacency matrix and the GNN model weights, we can find adversarially robust graph lottery tickets that are highly sparse yet achieve competitive performance under different untargeted training-time structure attacks. Evaluations conducted on various benchmarks, considering different poisoning structure attacks, namely PGD, MetaAttack, Meta-PGD, and PR-BCD, demonstrate that the GLTs generated by ARGS can significantly improve robustness, even at high levels of sparsity.
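The iterative prune-and-retrain loop the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy adjacency matrix, weight vector, pruning fraction, and the placeholder for the combined loss (cross-entropy on train labels, cross-entropy on test pseudo labels, and a homophily term) are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
adj = rng.random((6, 6))        # toy (perturbed) adjacency matrix, hypothetical
w = rng.normal(size=20)         # toy GNN weight vector, hypothetical
adj_mask = np.ones_like(adj)    # binary mask over graph edges
w_mask = np.ones_like(w)        # binary mask over GNN weights

def prune_smallest(values, mask, frac):
    """Zero out the smallest-magnitude `frac` of still-active mask entries."""
    flat_vals, flat_mask = values.ravel(), mask.ravel().copy()
    active = np.flatnonzero(flat_mask)
    k = int(round(len(active) * frac))
    drop = active[np.argsort(np.abs(flat_vals[active]))[:k]]
    flat_mask[drop] = 0.0
    return flat_mask.reshape(mask.shape)

for _ in range(3):  # lottery-ticket-style iterative rounds
    # In ARGS, each round would retrain the masked GNN by minimizing a loss of
    # the form L = L_ce(train labels) + a * L_ce(test pseudo labels) + b * homophily term
    # (a, b are illustrative loss weights); here we only show the pruning step.
    adj_mask = prune_smallest(adj, adj_mask, 0.2)  # sparsify the graph structure
    w_mask = prune_smallest(w, w_mask, 0.2)        # sparsify the GNN weights

print(f"adjacency sparsity: {1 - adj_mask.mean():.2f}")
print(f"weight sparsity:    {1 - w_mask.mean():.2f}")
```

Pruning both the adjacency mask and the weight mask a fixed fraction per round, then retraining before the next round, is what lets the surviving "ticket" adapt away from the adversarially perturbed edges.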

Authors (5)
  1. Subhajit Dutta Chowdhury
  2. Zhiyu Ni
  3. Qingyuan Peng
  4. Souvik Kundu
  5. Pierluigi Nuzzo
Citations (2)