
Structure-Sensitive Graph Dictionary Embedding for Graph Classification

Published 18 Jun 2023 in cs.LG and cs.CV (arXiv:2306.10505v1)

Abstract: Graph structure expression plays a vital role in distinguishing various graphs. In this work, we propose a Structure-Sensitive Graph Dictionary Embedding (SS-GDE) framework that transforms input graphs into the embedding space of a graph dictionary for the graph classification task. Instead of plainly using a base graph dictionary, we propose variational graph dictionary adaptation (VGDA) to generate a personalized dictionary (named the adapted graph dictionary) tailored to each input graph. In particular, the adaptation introduces Bernoulli sampling to adjust substructures of the base graph keys according to each input, which greatly increases the expressive capacity of the base dictionary. To make the cross-graph measurement both sensitive and stable, multi-sensitivity Wasserstein encoding is proposed to produce the embeddings by applying multi-scale attention to optimal transport. To optimize the framework, we introduce mutual information as the objective, which further reduces to variational inference over the adapted graph dictionary. We evaluate SS-GDE on multiple graph classification datasets, and the experimental results demonstrate its effectiveness and superiority over state-of-the-art methods.
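The two core operations the abstract describes can be sketched in code: Bernoulli sampling of a base dictionary key's nodes to form an adapted key, followed by an entropic optimal-transport (Sinkhorn) comparison between the input graph and the adapted key. This is a minimal NumPy illustration, not the paper's implementation: the function names (`bernoulli_adapt`, `sinkhorn`), the uniform node marginals, the squared-Euclidean node cost, and the fixed keep-probabilities are all assumptions, and the learned attention, multi-scale sensitivity, and variational objective are omitted.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, n_iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.
    C: (n, m) node-to-node cost matrix; a, b: node marginals (each sums to 1).
    Returns the transport plan T and the transport cost <T, C>."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]
    return T, float((T * C).sum())

def bernoulli_adapt(key_feats, probs, rng):
    """Adapt one base dictionary key by Bernoulli-sampling a node mask.
    key_feats: (n, d) node features; probs: per-node keep probabilities
    (in the paper these are conditioned on the input graph; fixed here)."""
    mask = rng.random(len(probs)) < probs
    if not mask.any():                     # keep at least one node
        mask[np.argmax(probs)] = True
    return key_feats[mask]

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))            # input-graph node features (toy)
key = rng.standard_normal((6, 8))          # one base dictionary key (toy)
key_adapted = bernoulli_adapt(key, np.full(6, 0.7), rng)

# Pairwise squared-Euclidean cost between node features, normalized
# so the entropic kernel exp(-C/eps) does not underflow.
C = ((x[:, None, :] - key_adapted[None, :, :]) ** 2).sum(-1)
C = C / C.max()
a = np.full(len(x), 1.0 / len(x))
b = np.full(len(key_adapted), 1.0 / len(key_adapted))
T, cost = sinkhorn(C, a, b)
print(cost)
```

In the full framework, one such transport cost per adapted key yields a vector of cross-graph measurements that, weighted by multi-scale attention, forms the graph's embedding for classification.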
