Pooling in Graph Convolutional Neural Networks (2004.03519v1)
Published 7 Apr 2020 in eess.SP and cs.LG
Abstract: Graph convolutional neural networks (GCNNs) are a powerful extension of deep learning techniques to graph-structured data problems. We empirically evaluate several pooling methods for GCNNs, and combinations of those graph pooling methods with three different architectures: GCN, TAGCN, and GraphSAGE. We confirm that graph pooling, especially DiffPool, improves classification accuracy on popular graph classification datasets and find that, on average, TAGCN achieves comparable or better accuracy than GCN and GraphSAGE, particularly for datasets with larger and sparser graph structures.
- Mark Cheung
- John Shi
- Lavender Yao Jiang
- Oren Wright
- José M. F. Moura
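The DiffPool method highlighted in the abstract coarsens a graph by learning a soft cluster-assignment matrix S and applying X' = SᵀX and A' = SᵀAS (Ying et al., 2018). A minimal NumPy sketch of that coarsening step, with a random assignment matrix standing in for the learned one:

```python
import numpy as np

def diffpool_coarsen(X, A, S):
    """One DiffPool coarsening step: pool n nodes into k clusters.

    X: (n, d) node features; A: (n, n) adjacency;
    S: (n, k) raw cluster-assignment scores (learned by a GNN in practice).
    """
    # Row-wise softmax so each node's cluster memberships sum to 1.
    S = np.exp(S - S.max(axis=1, keepdims=True))
    S = S / S.sum(axis=1, keepdims=True)
    X_pooled = S.T @ X       # (k, d) pooled node features
    A_pooled = S.T @ A @ S   # (k, k) coarsened adjacency
    return X_pooled, A_pooled

# Toy graph: 4 nodes, 2 features, pooled to 2 clusters.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = rng.standard_normal((4, 2))  # stand-in for learned assignment scores
Xp, Ap = diffpool_coarsen(X, A, S)
print(Xp.shape, Ap.shape)  # (2, 2) (2, 2)
```

In the full architecture these coarsening steps are stacked between graph-convolution layers (GCN, TAGCN, or GraphSAGE in the paper's experiments), shrinking the graph until a fixed-size representation remains for classification.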