
Graph Matching Networks for Learning the Similarity of Graph Structured Objects (1904.12787v2)

Published 29 Apr 2019 in cs.LG and stat.ML

Abstract: This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.

Citations (476)

Summary

  • The paper introduces a novel graph similarity framework using propagation layers, GRUs, and attention mechanisms to capture complex graph structures.
  • It demonstrates that strategic weight initialization and layered propagation enhance model stability and performance on graph edit distance and binary function similarity tasks.
  • Experimental results show robust generalization across varied graph sizes, offering promising directions for future dynamic feature representation research.

Deep Graph Similarity Learning: Model Architectures and Experimental Insights

Introduction

The paper presents a thorough analysis of model architectures for deep graph similarity learning, focusing on graph embedding and matching models. It offers detailed insights into the technical aspects of these models, including the use of specific neural network components and initialization strategies. Furthermore, it provides experimental results on tasks such as graph edit distance learning and binary function similarity search.

Model Architectures

The core architecture uses propagation layers whose message function is a multi-layer perceptron (MLP) with one hidden layer. Messages are produced at twice the dimensionality of the node state vectors, and the message weights are initialized with scaled-down values to stabilize training; this keeps the summed message vectors from growing large early on and hampering learning.
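As a rough illustration, the following sketch (assuming PyTorch; all module and variable names are hypothetical and not the authors' reference code) shows one propagation step with an MLP message function of twice the node-state width, down-scaled initial weights, and summed message aggregation:

```python
import torch
import torch.nn as nn

class PropagationLayer(nn.Module):
    """Sketch of one message-passing step (hypothetical names, not the reference implementation)."""

    def __init__(self, node_dim, edge_dim):
        super().__init__()
        # Message MLP with one hidden layer; messages are twice the node-state size.
        self.message_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, 2 * node_dim),
            nn.ReLU(),
            nn.Linear(2 * node_dim, 2 * node_dim),
        )
        # Scale down the message weights so summed messages stay small early in training.
        for p in self.message_mlp.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p, gain=0.1)
        # GRU cell used as the node update module (discussed below).
        self.node_update = nn.GRUCell(2 * node_dim, node_dim)

    def forward(self, h, edge_index, edge_feat):
        src, dst = edge_index                          # [E], [E]
        msg_in = torch.cat([h[src], h[dst], edge_feat], dim=-1)
        messages = self.message_mlp(msg_in)            # [E, 2 * node_dim]
        # Sum incoming messages per destination node.
        agg = torch.zeros(h.size(0), messages.size(-1), device=h.device)
        agg.index_add_(0, dst, messages)
        return self.node_update(agg, h)                # updated node states [N, node_dim]
```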

Gated Recurrent Units (GRUs) outperform MLPs as the node update module, serving as the primary mechanism for integrating incoming messages into the node states. For graph-level aggregation, the model applies a linear transformation to the node states and modulates the result with a logistic sigmoid gate. The attention weights central to the matching model are computed from either Euclidean or dot-product similarities between node states across the two graphs.
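A minimal sketch of the cross-graph attention-based matching mechanism, under the same assumptions (PyTorch, hypothetical names), could look as follows; the similarity can be a dot product or a negative squared Euclidean distance:

```python
import torch

def cross_graph_attention(h1, h2, similarity="dotproduct"):
    """Attention-based matching between node states of two graphs (hypothetical sketch).

    h1: [N1, D] node states of graph 1; h2: [N2, D] node states of graph 2.
    Returns per-node matching vectors mu1, mu2 that can be fed into the node
    update alongside the within-graph messages.
    """
    if similarity == "dotproduct":
        scores = h1 @ h2.t()                          # [N1, N2]
    else:  # negative squared Euclidean distance
        scores = -torch.cdist(h1, h2, p=2) ** 2       # [N1, N2]

    a12 = torch.softmax(scores, dim=1)   # attention of graph-1 nodes over graph-2 nodes
    a21 = torch.softmax(scores, dim=0)   # attention of graph-2 nodes over graph-1 nodes

    # Matching vector: difference between a node and its soft "match" in the other graph.
    mu1 = h1 - a12 @ h2                  # [N1, D]
    mu2 = h2 - a21.t() @ h1              # [N2, D]
    return mu1, mu2
```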

Experimental Details

Graph Edit Distance Learning

The experiments involve graph structures with no additional node or edge features: node states are initialized to vectors of ones, and the node encoder reduces to a single linear layer. The authors investigate various hyperparameter settings, noting superior performance with pair training and the utility of sharing parameters across propagation layers.
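One plausible form of a margin-based pair loss over graph embeddings for this setup is sketched below (the summary above does not spell out the exact objective, so the specific formula and names are assumptions):

```python
import torch

def pairwise_margin_loss(g1, g2, labels, margin=1.0):
    """Margin-based pair loss on Euclidean distance between graph vectors (assumed form).

    g1, g2: [B, D] graph embeddings; labels: [B] with +1 for similar pairs
    and -1 for dissimilar pairs.
    """
    d = torch.norm(g1 - g2, dim=-1)                     # Euclidean distance per pair
    # Similar pairs are pushed below the margin, dissimilar pairs above it.
    return torch.clamp(margin - labels * (1.0 - d), min=0.0).mean()
```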

The Weisfeiler–Lehman (WL) kernel serves as a baseline; its representation grows with graph size, whereas the proposed models compress each graph into a fixed-size vector. Despite this fixed capacity, the models deliver consistent performance, and models trained on smaller graphs generalize to moderately larger ones.

The experimental results underscore model efficacy across varied graph sizes and feature configurations. In particular, the models generalize to graphs larger than those seen during training, although returns diminish as graph size and complexity increase.

In the binary function similarity task, the setup is simpler: edge and node states are initialized to uniform vectors, and node representations are obtained by summing learned embeddings of the instruction operators in each node. Here, triplet training yields slightly better results than pairwise training, and GRUs again work best for node state updates.
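The triplet variant can be sketched similarly (again an assumed form with hypothetical names): the anchor is pulled closer to the positive than to the negative by at least a margin.

```python
import torch

def triplet_margin_loss(g_anchor, g_pos, g_neg, margin=1.0):
    """Triplet loss on Euclidean distances between graph vectors (assumed form).

    Encourages d(anchor, positive) + margin < d(anchor, negative).
    """
    d_pos = torch.norm(g_anchor - g_pos, dim=-1)
    d_neg = torch.norm(g_anchor - g_neg, dim=-1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```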

Results on additional datasets, such as one compiled from compression software, illustrate the generalization capability and robustness of the models. However, the small size of these datasets makes the models prone to overfitting, which requires care in such settings.

Attention Visualization

Cross-graph attention is visualized across propagation steps, showing that the attention alignments remain stable and accurate as the number of propagation layers increases. The model matches isomorphic graphs effectively, even when structural symmetries make node correspondences ambiguous.
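A simple way to reproduce this kind of figure, assuming one has extracted the per-step attention matrices from the model (hypothetical input format and names), is to plot each matrix as a heatmap:

```python
import matplotlib.pyplot as plt

def plot_attention_per_step(attention_mats):
    """attention_mats: list of [N1, N2] arrays, one per propagation step (hypothetical input)."""
    n = len(attention_mats)
    fig, axes = plt.subplots(1, n, figsize=(3 * n, 3), squeeze=False)
    for step, (ax, attn) in enumerate(zip(axes[0], attention_mats)):
        ax.imshow(attn, cmap="viridis", aspect="auto")
        ax.set_title(f"step {step + 1}")
        ax.set_xlabel("graph 2 nodes")
        ax.set_ylabel("graph 1 nodes")
    fig.tight_layout()
    plt.show()
```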

Implications and Future Directions

The findings in this paper underscore the potential for advanced graph neural network architectures to enhance graph similarity tasks. The use of GRUs and strategic weight initialization offers promising avenues for increasing accuracy and stability in complex graph comparisons. The results pave the way for future exploration into more dynamic feature representations and model adaptations that cater to increasingly large and intricate graph datasets. Expanding these models to learn more sophisticated similarity metrics and to handle diverse types of graph data holds significant promise in advancing similarity learning across multiple domains.