Median activation functions for graph neural networks (1810.12165v2)

Published 29 Oct 2018 in cs.LG and stat.ML

Abstract: Graph neural networks (GNNs) have been shown to replicate convolutional neural networks' (CNNs) superior performance in many problems involving graphs. By replacing regular convolutions with linear shift-invariant graph filters (LSI-GFs), GNNs take into account the (irregular) structure of the graph and provide meaningful representations of network data. However, LSI-GFs fail to encode local nonlinear graph signal behavior, and so do regular activation functions, which are nonlinear but pointwise. To address this issue, we propose median activation functions with support on graph neighborhoods instead of individual nodes. A GNN architecture with a trainable multiresolution version of this activation function is then tested on synthetic and real-world datasets, where we show that median activation functions can improve GNN capacity with a marginal increase in complexity.
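The core idea lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' implementation: for each node, the signal's median is taken over its k-hop neighborhood for k = 1..K, and the K medians are mixed with trainable weights. The class name `MultiResMedian`, the neighborhood representation, and the weighted-sum combination are illustrative assumptions; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class MultiResMedian(nn.Module):
    # Hypothetical sketch of a trainable multiresolution median activation.
    # For each node, take the median of the input signal over its k-hop
    # neighborhood (k = 1..K) and mix the K medians with trainable weights.
    def __init__(self, K):
        super().__init__()
        self.weights = nn.Parameter(torch.full((K,), 1.0 / K))  # mixing weights

    def forward(self, x, hop_neighborhoods):
        # x: (N,) real-valued graph signal.
        # hop_neighborhoods: list of K lists; hop_neighborhoods[k][i] is a
        # LongTensor of node indices in node i's (k+1)-hop neighborhood.
        per_hop = []
        for nbhds in hop_neighborhoods:
            # Median over each node's neighborhood; PyTorch's median
            # propagates a subgradient through the selected element.
            per_hop.append(torch.stack([x[nbrs].median() for nbrs in nbhds]))
        medians = torch.stack(per_hop)                    # shape (K, N)
        return (self.weights[:, None] * medians).sum(0)  # shape (N,)

# Toy usage on a 3-node path graph 0-1-2, with 1-hop neighborhoods only
# (each node's neighborhood includes itself).
act = MultiResMedian(K=1)
x = torch.tensor([0.0, 5.0, 1.0])
nbhds = [[torch.tensor([0, 1]), torch.tensor([0, 1, 2]), torch.tensor([1, 2])]]
print(act(x, nbhds))  # tensor([0., 1., 1.]); torch.median returns the lower
                      # middle value for even-sized neighborhoods
```

Unlike a pointwise nonlinearity such as ReLU, this operation couples each node's output to its neighbors' values, which is how it captures the local nonlinear graph signal behavior the abstract refers to.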

Authors (4)
  1. Luana Ruiz (34 papers)
  2. Fernando Gama (43 papers)
  3. Antonio G. Marques (78 papers)
  4. Alejandro Ribeiro (281 papers)
Citations (8)
