
Reproducibility and Geometric Intrinsic Dimensionality: An Investigation on Graph Neural Network Research (2403.08438v2)

Published 13 Mar 2024 in cs.LG and cs.AI

Abstract: Difficulties in replicating and reproducing empirical evidence in machine learning research have become a prominent topic in recent years. Ensuring that machine learning research results are sound and reliable requires reproducibility, which verifies the reliability of research findings using the same code and data. This promotes open and accessible research, robust experimental workflows, and the rapid integration of new findings. Evaluating the degree to which research publications support these different aspects of reproducibility is one goal of the present work. To this end, we introduce an ontology of reproducibility in machine learning and apply it to methods for graph neural networks. Building on these efforts, we turn towards another critical challenge in machine learning, namely the curse of dimensionality, which poses challenges in data collection, representation, and analysis, making it harder to find representative data and impeding the training and inference processes. Using the closely linked concept of geometric intrinsic dimension, we investigate to what extent the machine learning models used are influenced by the intrinsic dimension of the data sets they are trained on.
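For readers unfamiliar with the concept, the sketch below illustrates one common way to estimate the geometric intrinsic dimension of a data set: the TwoNN estimator, which fits the distribution of ratios between second- and first-nearest-neighbor distances. The abstract does not say which estimator the authors use, so this is an illustrative assumption rather than the paper's method; it assumes numpy and scikit-learn are available, and the function name `two_nn_intrinsic_dimension` is hypothetical.

```python
# Minimal sketch of the TwoNN intrinsic-dimension estimator (illustrative only;
# not necessarily the estimator used in the paper). Assumes numpy + scikit-learn.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def two_nn_intrinsic_dimension(X: np.ndarray, discard_fraction: float = 0.1) -> float:
    """Estimate the intrinsic dimension of the rows of X from two-NN distance ratios."""
    # Distances to the two nearest neighbors of each point (column 0 is the point itself).
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = np.sort(r2 / r1)                    # ratio of second to first neighbor distance
    n = len(mu)
    keep = int(n * (1.0 - discard_fraction))  # drop the largest ratios (boundary/noise effects)
    mu = mu[:keep]
    F = np.arange(1, keep + 1) / n            # empirical CDF of the kept ratios
    # Under the TwoNN model, -log(1 - F(mu)) = d * log(mu); fit d by least squares through the origin.
    x, y = np.log(mu), -np.log(1.0 - F)
    return float(np.sum(x * y) / np.sum(x * x))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Data on a 3-dimensional linear subspace embedded in a 10-dimensional ambient space.
    latent = rng.normal(size=(2000, 3))
    X = latent @ rng.normal(size=(3, 10))
    print(f"estimated intrinsic dimension: {two_nn_intrinsic_dimension(X):.2f}")  # close to 3
```

On data concentrated near a low-dimensional manifold, such an estimate is typically much smaller than the ambient feature dimension; it is this kind of geometric quantity that the paper relates to the behavior of graph neural network models.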

Authors (3)
  1. Tobias Hille (3 papers)
  2. Maximilian Stubbemann (15 papers)
  3. Tom Hanika (37 papers)
