Large-scale graph representation learning with very deep GNNs and self-supervision (2107.09422v1)

Published 20 Jul 2021 in cs.LG, cs.AI, cs.SI, and stat.ML

Abstract: Effectively and efficiently deploying graph neural networks (GNNs) at scale remains one of the most challenging aspects of graph representation learning. Many powerful solutions have only ever been validated on comparatively small datasets, often with counter-intuitive outcomes -- a barrier which has been broken by the Open Graph Benchmark Large-Scale Challenge (OGB-LSC). We entered the OGB-LSC with two large-scale GNNs: a deep transductive node classifier powered by bootstrapping, and a very deep (up to 50-layer) inductive graph regressor regularised by denoising objectives. Our models achieved an award-level (top-3) performance on both the MAG240M and PCQM4M benchmarks. In doing so, we demonstrate evidence of scalable self-supervised graph representation learning, and utility of very deep GNNs -- both very important open issues. Our code is publicly available at: https://github.com/deepmind/deepmind-research/tree/master/ogb_lsc.
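The abstract's "very deep (up to 50-layer)" graph regressor relies on residual connections to keep such deep stacks trainable. As a rough illustration only (not the authors' implementation, which lives in the linked repository), the sketch below stacks 50 message-passing layers with mean neighbour aggregation and a residual skip; all names and the aggregation scheme are assumptions for this toy example:

```python
import numpy as np

def mp_layer(h, edges, W):
    """One toy message-passing step: mean-aggregate neighbour features,
    apply a shared linear map + ReLU, and add a residual connection.
    The residual term is what lets a 50-layer stack remain stable."""
    n, _ = h.shape
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for src, dst in edges:            # sum incoming neighbour features
        agg[dst] += h[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean aggregation (assumed here)
    return h + np.maximum(agg @ W, 0.0)  # residual + ReLU update

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))           # 4 nodes, 8 features
W = rng.normal(size=(8, 8)) * 0.1     # small weights shared across layers
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

for _ in range(50):                   # a 50-layer stack, echoing the paper
    h = mp_layer(h, edges, W)
```

Without the `h +` residual term, repeatedly applying the same nonlinear map this many times typically collapses or explodes the node features; the skip connection is the standard remedy that makes depth at this scale feasible.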

Authors (11)
  1. Ravichandra Addanki (3 papers)
  2. Peter W. Battaglia (15 papers)
  3. David Budden (29 papers)
  4. Andreea Deac (15 papers)
  5. Jonathan Godwin (14 papers)
  6. Thomas Keck (10 papers)
  7. Wai Lok Sibon Li (3 papers)
  8. Alvaro Sanchez-Gonzalez (25 papers)
  9. Jacklynn Stott (4 papers)
  10. Shantanu Thakoor (15 papers)
  11. Petar Veličković (81 papers)
Citations (24)