
Are Powerful Graph Neural Nets Necessary? A Dissection on Graph Classification (1905.04579v3)

Published 11 May 2019 in cs.LG, cs.SI, and stat.ML

Abstract: Graph Neural Nets (GNNs) have received increasing attention, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding of what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting in the Graph Feature Network (GFN), which is a simple lightweight neural net defined on a *set* of graph-augmented features. Further linearization of GFN's set function results in the Graph Linear Network (GLN), which is a linear function. Empirically, we perform evaluations on common graph classification benchmarks. To our surprise, we find that, despite the simplification, GFN can match or exceed the best accuracies produced by recently proposed GNNs (at a fraction of the computation cost), while GLN underperforms significantly. Our results demonstrate the importance of a non-linear set function, and suggest that linear graph filtering with a non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.

Authors (3)
  1. Ting Chen (148 papers)
  2. Song Bian (21 papers)
  3. Yizhou Sun (149 papers)
Citations (82)

Summary

  • The paper introduces a novel decomposition of GNNs into graph filtering and set functions to analyze their individual roles in graph classification.
  • It demonstrates that linearizing the graph filtering component into the Graph Feature Network (GFN) can match or exceed state-of-the-art performance with reduced computational costs.
  • The findings stress that non-linear set functions are essential for performance, prompting a reevaluation of the necessity for complex GNN architectures.

Dissecting the Necessity of Complex Graph Neural Networks in Graph Classification

The study of Graph Neural Networks (GNNs) has predominantly focused on their enhanced capabilities in complex graph tasks such as node and graph classification. Although widely acknowledged as powerful tools for various graph-related problems, there remains a persistent question concerning the true necessity of their complex architectures. This paper provides a comprehensive dissection of GNNs, specifically targeting graph classification, in order to isolate the components critical to their success.

The authors introduce a novel paradigm by decomposing GNNs into two distinct parts: graph filtering and the set function. The graph filtering process performs neighbor aggregation to gather information from adjacent nodes, while the set function composes the resulting node features into a graph-level prediction. A primary focus of this work is to study each component's significance separately by simplifying, or "linearizing", it.
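
To make the split concrete, the two parts of a GCN-style GNN can be written as follows. This uses standard GCN notation as a generic illustration of the decomposition, not necessarily the paper's exact formulation:

```latex
% Graph filtering: K stacked neighbor-aggregation layers using the
% degree-normalized adjacency \tilde{A} (self-loops added):
\tilde{A} = \hat{D}^{-1/2}(A + I)\,\hat{D}^{-1/2}, \qquad
H^{(k+1)} = \sigma\big(\tilde{A}\, H^{(k)} W^{(k)}\big), \quad H^{(0)} = X.

% Set function: a permutation-invariant readout over the final node
% embeddings, e.g. sum pooling followed by an MLP:
\hat{y} = \rho\big(\{\, h_v^{(K)} : v \in V \,\}\big)
        = \mathrm{MLP}\Big(\textstyle\sum_{v \in V} h_v^{(K)}\Big).
```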

Initially, the graph filtering component is linearized, leading to what the authors propose as the Graph Feature Network (GFN). GFN is a lightweight neural network defined on a set of graph-augmented node features, which reduces computational demands while preserving predictive power. Taking a step further, the set function in GFN is also linearized, culminating in the Graph Linear Network (GLN), which is simply a linear function of the graph-augmented features.
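
A minimal single-graph PyTorch sketch of this construction is shown below. The hop count, hidden sizes, and the exact choice of augmented features (node degree plus multi-hop propagated copies of X) are illustrative assumptions rather than the paper's precise configuration:

```python
import torch
import torch.nn as nn


class GFN(nn.Module):
    """Sketch of a Graph Feature Network for a single graph.

    Graph filtering is linear and parameter-free: the input features are
    augmented with multi-hop propagated copies X, A~X, ..., A~^K X plus the
    node degree. All learned non-linearity lives in the set function
    (per-node MLP -> sum pooling -> classifier MLP).
    """

    def __init__(self, in_dim, hidden_dim, num_classes, num_hops=3):
        super().__init__()
        self.num_hops = num_hops
        aug_dim = in_dim * (num_hops + 1) + 1  # +1 for the degree feature
        self.node_mlp = nn.Sequential(
            nn.Linear(aug_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.graph_mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x, adj):
        # x: (num_nodes, in_dim) features; adj: (num_nodes, num_nodes) adjacency
        a_hat = adj + torch.eye(adj.size(0), dtype=adj.dtype, device=adj.device)
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        adj_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

        feats = [adj.sum(dim=1, keepdim=True), x]  # degree + raw features
        h = x
        for _ in range(self.num_hops):
            h = adj_norm @ h  # linear graph filtering: no weights, no activation
            feats.append(h)
        x_aug = torch.cat(feats, dim=1)     # graph-augmented node features

        node_emb = self.node_mlp(x_aug)     # non-linear set function begins here
        graph_emb = node_emb.sum(dim=0)     # permutation-invariant sum pooling
        return self.graph_mlp(graph_emb)    # class logits
```

Removing the ReLU activations here (so that both MLPs collapse into single linear maps) turns this sketch into the GLN ablation, the variant the paper finds to underperform significantly.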

Across diverse graph classification benchmarks, GFN matches, and at times surpasses, the performance of contemporary GNNs while using significantly fewer computational resources. GLN, however, underperforms markedly relative to both GFN and state-of-the-art GNNs. This discrepancy indicates that the non-linearity of the set function matters far more than that of the graph filtering, suggesting that linear graph filtering combined with a non-linear set function constitutes an effective and efficient framework for current graph classification benchmarks.

The investigation leads to an important implication about the architectural needs for graph classification networks. It suggests a reassessment of complex non-linear graph filtering functionalities found in many modern GNNs, as their contribution to performance might be overestimated, given the empirical success of simpler models like GFN.

Practically, the efficiency of GFN can enable applications that require real-time processing or run on systems with limited computational capacity. Theoretically, these findings open pathways for revisiting the design of graph-based deep learning models, encouraging the exploration of novel architectures that balance simplicity, efficiency, and performance. Future work might test these simplified architectures on more challenging datasets, adjusting model complexity to the intrinsic properties of the data.

In summary, this paper presents a methodical examination of GNN architectures, challenging the perceived need for complicated, resource-heavy models in certain graph classification scenarios. This results in encouragement for the broader adoption of resource-efficient architectures like GFN in practical applications, sparking a discourse on the scalable optimization of GNN frameworks.