Federated Graph Learning

Updated 6 July 2025
  • Federated Graph Learning is a distributed paradigm that combines graph neural networks with federated learning to train models on decentralized, private graph data.
  • It employs techniques such as parameter aggregation, self-supervision, and structure proxy alignment to enhance model performance and mitigate local bias.
  • FGL supports diverse applications from drug discovery to fraud detection by efficiently handling heterogeneous graph structures under strict privacy constraints.

Federated Graph Learning (FGL) is a distributed machine learning paradigm that integrates graph neural network (GNN) models with federated learning, enabling multiple clients—each with private graph-structured data—to train shared or personalized models without directly sharing their raw data. FGL addresses the "isolated data islands" challenge, providing a framework to mine and analyze relational data distributed across organizations under privacy constraints (2105.03170, 2105.11099). By introducing mechanisms such as parameter aggregation, self-supervision, structure proxy alignment, and adaptive graph imputation, FGL enhances both the global representation and the performance of local models on heterogeneous datasets.

1. FGL Taxonomy and Problem Settings

A rigorous classification of FGL settings is established based on how graph data is partitioned among clients (2105.11099):

  1. Inter-graph FGL: Each client holds entire (often small) graphs, typical in molecular property prediction. Modeling and aggregation target graph-level outputs.
  2. Intra-graph FGL: The global graph is partitioned among clients. This comprises:
    • Horizontal FGL: Each client stores a subset of nodes (subgraph) from an underlying global graph, with the same feature/label spaces.
    • Vertical FGL: All clients share the same node identities but hold differing feature/label spaces, requiring joint representation learning.
  3. Graph-structured FGL: Clients themselves form nodes in a meta-level graph (e.g., devices in a sensor network), with their links guiding model aggregation (2105.11099).

This taxonomy informs diverse applications, from drug discovery and financial fraud detection to federated traffic flow analysis. Each division creates unique challenges in model design, communication, and security.
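
To make the intra-graph settings concrete, the following minimal numpy sketch (illustrative only; the helper names are hypothetical) partitions a toy global graph horizontally (disjoint node subsets with induced subgraphs, so cross-client edges are lost) and vertically (shared node identities, disjoint feature columns):

```python
import numpy as np

# Toy global graph: 6 nodes, a dense adjacency matrix, and 4-dimensional features.
A = np.random.randint(0, 2, size=(6, 6))
X = np.random.randn(6, 4)

def horizontal_partition(A, X, node_groups):
    """Horizontal (subgraph) FGL: each client keeps a node subset and the induced subgraph."""
    clients = []
    for nodes in node_groups:
        idx = np.array(nodes)
        clients.append({"A": A[np.ix_(idx, idx)], "X": X[idx]})
    return clients

def vertical_partition(X, feature_groups):
    """Vertical FGL: all clients see the same nodes but hold disjoint feature columns."""
    return [{"X": X[:, cols]} for cols in feature_groups]

h_clients = horizontal_partition(A, X, [[0, 1, 2], [3, 4, 5]])
v_clients = vertical_partition(X, [[0, 1], [2, 3]])
print(h_clients[0]["A"].shape, v_clients[0]["X"].shape)  # (3, 3) (6, 2)
```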

2. Core Algorithms and Federated Optimization

The canonical FGL workflow cycles through local training, information aggregation, and global update (2105.03170, 2110.12906):

  • Local Model Training: Clients fit local GNNs (e.g., GCN, GraphSAGE), processing only their subgraphs or datasets.
  • Information Upload: Instead of sharing raw data, clients may transmit model parameters, prediction outputs, or embeddings to a server.
  • Aggregation and Update: The server computes the global model, commonly through weighted averaging (e.g., FedAvg: $\bar{W} = \sum_{k=1}^{K} \frac{N_k}{M} W_k$, where $N_k$ is client $k$'s node count and $M = \sum_k N_k$), and optionally generates auxiliary global signals (pseudo labels, pseudo graphs).
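
A minimal server-side sketch of the node-count-weighted averaging step above, assuming each client's parameters are flattened into a single numpy vector (the function name is illustrative):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_node_counts):
    """Weighted average of client parameters: W_bar = sum_k (N_k / M) W_k."""
    total = float(sum(client_node_counts))          # M = sum_k N_k
    coeffs = np.array(client_node_counts) / total   # N_k / M
    return coeffs @ np.stack(client_weights)        # \bar{W}

# Three clients holding different amounts of local graph data.
w1, w2, w3 = np.ones(5), 2 * np.ones(5), 3 * np.ones(5)
global_w = fedavg_aggregate([w1, w2, w3], [100, 300, 600])
print(global_w)  # pulled toward w3, which holds the most nodes
```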

A prototypical FGL loss function for each client is constructed as:

$\mathcal{L} = \mathcal{L}_{\mathrm{GCN}} + \alpha \mathcal{L}_{\mathrm{SSL}}$

where $\mathcal{L}_{\mathrm{GCN}}$ is the cross-entropy on real labels and $\mathcal{L}_{\mathrm{SSL}}$ is a self-supervised loss (such as on pseudo labels).

FedGL (2105.03170) introduces a global self-supervision scheme: clients upload softmax predictions and node embeddings, from which the server constructs “global pseudo labels” (confidence-thresholded fused predictions) and “global pseudo graphs” (embedding similarity–based connectivity). These augment local training data, enabling higher-quality learning even with sparse or non-IID supervision.
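
The client objective above can be sketched as follows, assuming the server has already broadcast pseudo labels with confidence scores; the function names and the confidence thresholding are illustrative rather than FedGL's exact implementation:

```python
import numpy as np

def cross_entropy(logits, labels, mask):
    """Mean cross-entropy over the nodes selected by the boolean mask."""
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    losses = -log_probs[np.arange(len(labels)), labels]
    return losses[mask].mean() if mask.any() else 0.0

def client_loss(logits, labels, labeled_mask, pseudo_labels, pseudo_conf,
                alpha=0.5, tau=0.9):
    """L = L_GCN (real labels) + alpha * L_SSL (confident global pseudo labels)."""
    l_gcn = cross_entropy(logits, labels, labeled_mask)
    pseudo_mask = (pseudo_conf >= tau) & ~labeled_mask     # confident, unlabeled nodes only
    l_ssl = cross_entropy(logits, pseudo_labels, pseudo_mask)
    return l_gcn + alpha * l_ssl

rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 3))
labels = rng.integers(0, 3, size=10)        # placeholder entries are ignored for unlabeled nodes
labeled = np.zeros(10, dtype=bool); labeled[:4] = True
print(client_loss(logits, labels, labeled,
                  pseudo_labels=rng.integers(0, 3, size=10),
                  pseudo_conf=rng.uniform(size=10)))
```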

3. Handling Heterogeneity and Local Bias

Heterogeneity in node features, labels, or structure is a pervasive issue in FGL (2110.12906, 2408.09393). For horizontally partitioned graphs, the absence of cross-client edges leads to “local bias”—models overfit to local data and diverge from what would be learned in centralized settings.

Bias Mitigation Mechanisms:

  • Full Cross-Client Edge Utilization: Distributed computation and communication protocols are designed so that clients assemble the necessary cross-client messages to recover centralized message-passing accuracy (2110.12906). For example, distributed GCN updates decompose the Laplacian and orchestrate partial aggregation across clients (see the sketch after this list):

$H^{(l)}_i = \sum_{j=1}^{m} L_{ij} Z^{(l-1)}_{j} W^{(l)}$

  • Label-Guided Sampling: To reduce computational overhead and balance class distributions, label-guided subgraph sampling (with sampling probabilities informed by class frequency) is applied.
  • Proxy Alignment: Structural or class-wise structure proxies in the latent space can also be aligned globally (e.g., FedSpray (2408.09393)), acting as unbiased reference signals for minority or marginalized nodes.
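
The distributed GCN update above can be sketched as follows (illustrative only; the protocol in 2110.12906 additionally specifies how the partial messages are scheduled and exchanged): client $i$ receives the partial products $L_{ij} Z_j^{(l-1)}$ from its peers instead of their raw features, sums them, and applies the layer weights:

```python
import numpy as np

def distributed_gcn_layer(L_blocks, Z_blocks, W):
    """One GCN layer with cross-client aggregation.

    L_blocks[i][j] is the (n_i x n_j) block of the normalized graph operator coupling
    client i's nodes to client j's nodes; Z_blocks[j] holds client j's previous-layer
    embeddings. Client i only ever sees the products L_blocks[i][j] @ Z_blocks[j].
    """
    m = len(Z_blocks)
    H = []
    for i in range(m):
        agg = sum(L_blocks[i][j] @ Z_blocks[j] for j in range(m))  # sum_j L_ij Z_j^{(l-1)}
        H.append(agg @ W)                                          # H_i^{(l)}
    return H

# Two clients with 3 and 2 nodes, 4-dimensional embeddings, 8 hidden units.
rng = np.random.default_rng(0)
Z = [rng.normal(size=(3, 4)), rng.normal(size=(2, 4))]
L = [[rng.normal(size=(3, 3)), rng.normal(size=(3, 2))],
     [rng.normal(size=(2, 3)), rng.normal(size=(2, 2))]]
W = rng.normal(size=(4, 8))
H = distributed_gcn_layer(L, Z, W)
print(H[0].shape, H[1].shape)  # (3, 8) (2, 8)
```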

Empirical evaluation demonstrates that such methods notably reduce local bias, leading to accuracy improvements and faster convergence compared with parameter-only exchange (2110.12906, 2408.09393).

4. Scalability, Efficiency, and Privacy

Scalable, resource-efficient, and privacy-preserving methods are essential for real-world FGL deployment (2401.11755, 2406.10616, 2105.03170).

Scalability:

  • Topology-aware approaches such as FedGTA (2401.11755) employ metrics like local smoothing confidence and mixed neighbor moments to direct personalized aggregation, showing scalability to graphs with over $10^8$ nodes and $10^9$ edges.
  • Hierarchical systems like HiFGL (2406.10616) organize clients in a three-level architecture (device-client, silo-client, server), supporting both cross-silo and cross-device federated learning.
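
As a generic illustration of the hierarchical idea (a two-stage weighted-averaging sketch under assumed names, not HiFGL's actual algorithm), device models are first averaged within each silo and the resulting silo models are then averaged at the server:

```python
import numpy as np

def weighted_average(params, weights):
    w = np.array(weights, dtype=float)
    return (w / w.sum()) @ np.stack(params)

def hierarchical_aggregate(silos):
    """Two-stage aggregation: devices -> silo-client -> server.

    `silos` maps a silo name to a list of (device_params, device_sample_count) pairs.
    """
    silo_models, silo_sizes = [], []
    for devices in silos.values():
        params, counts = zip(*devices)
        silo_models.append(weighted_average(params, counts))  # intra-silo averaging
        silo_sizes.append(sum(counts))
    return weighted_average(silo_models, silo_sizes)           # server-side averaging

silos = {
    "silo_A": [(np.ones(4), 50), (2 * np.ones(4), 150)],
    "silo_B": [(3 * np.ones(4), 200)],
}
print(hierarchical_aggregate(silos))
```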

Communication and Computation Efficiency:

  • Methods implement sparse or low-bit quantization, narrow layers, or selective parameter sharing (e.g., sharing only structural channel parameters (2408.11662)) to reduce bandwidth and computation.
  • Communication-efficient aggregation strategies utilize local stochastic gradient steps and compressed model updates (2412.13442).
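
One concrete way to compress an update is uniform low-bit quantization of the local parameter delta before upload; the sketch below is a generic illustration rather than the specific scheme of (2412.13442):

```python
import numpy as np

def quantize_update(delta, num_bits=8):
    """Uniformly quantize a float update to signed integers plus a single float scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(delta).max() / qmax
    if scale == 0:
        scale = 1.0
    q = np.clip(np.round(delta / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_update(q, scale):
    return q.astype(np.float32) * scale

delta = (np.random.randn(1000) * 0.01).astype(np.float32)  # local update after a few SGD steps
q, s = quantize_update(delta)
print(q.nbytes, "vs", delta.nbytes, "bytes uploaded")       # 1000 vs 4000
print(np.abs(dequantize_update(q, s) - delta).max())        # small quantization error
```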

Privacy Preservation:

  • Instead of raw features or edges, clients upload low-dimensional embeddings, soft outputs, or aggregated statistics (2105.03170).
  • Advanced techniques include neighbor-agnostic aggregation and polynomial-based secret sharing (as in HiFGL’s Secret Message Passing) to prevent subgraph or node-level data leakage (2406.10616).
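
As a generic illustration of the polynomial-based idea (a textbook Shamir-style sketch, not HiFGL's actual Secret Message Passing protocol), a scalar message is split into shares so that any `threshold` of them reconstruct it while fewer reveal nothing:

```python
import random

PRIME = 2**61 - 1  # field modulus for the shares

def make_shares(secret, threshold, num_shares):
    """Evaluate a random degree-(threshold-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret (Python 3.8+ for modular inverse)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456, threshold=3, num_shares=5)
print(reconstruct(shares[:3]))  # 123456
```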

5. Experimental Benchmarks and Results

Evaluation across multiple datasets (Cora, CiteSeer, ACM, Wiki, ogbn-papers100M, etc.) with varying partition scenarios demonstrates FGL algorithms’ superiority over local-only or vanilla federated learning baselines (2105.03170, 2401.11755, 2110.12906). Key results include:

  • FedGL (2105.03170) surpasses even centralized training by 2–7% in some splits, attributed to the enrichment from global self-supervision.
  • Distributed edge utilization and label-guided sampling (as in (2110.12906)) bring local model outputs close to centralized optima.
  • Scalability and robustness (e.g., in FedGTA (2401.11755)) are maintained across thousands of clients and billion-edge graphs, with efficient aggregation preserving accuracy.

Ablation studies validate the importance of pseudo label/graph construction, personalized aggregation, and bias-mitigation components.

6. Open Challenges and Research Directions

Despite advances, several challenges persist (2105.11099, 2401.11755, 2406.10616):

  • Non-IID graph structure: Varying topological statistics (degree distributions, clustering, path lengths) complicate convergence and degrade model performance.
  • Missing cross-client edges: In horizontal FGL settings, reconstructing or imputing latent connections remains nontrivial, motivating research into efficient edge inference and imputation generators.
  • Entity matching and privacy in vertical FGL: Secure and communication-efficient joining of vertically partitioned node features across organizations is necessary.
  • Fairness and minority node representation: Improvements remain possible for structurally marginalized or minority classes in subgraph-FL (2504.09963).
  • Dataset and evaluation standardization: Benchmarks such as OpenFGL (2408.16288) highlight heterogeneity in task, domain, and simulation strategy, advocating for systematic evaluation frameworks.

Anticipated research trends include adaptive aggregation, multi-level hierarchy, privacy guarantees, efficient knowledge transfer, and enhanced fairness/personalization mechanisms.

7. Summary Table: Representative FGL Approaches

| Approach | Key Principle | Privacy Mechanism | Scalability/Notes |
|---|---|---|---|
| FedGL (2105.03170) | Global self-supervision | Embeddings/soft outputs | Node classification, cross-domain |
| (2110.12906) | Cross-client edge utilization | Cross-client message orchestration | Reduces local bias, improves convergence |
| FedGTA (2401.11755) | Topology-aware aggregation | Personalized update, model-agnostic | Scales to 100M+ nodes, robust under splits |
| HiFGL (2406.10616) | Hierarchical privacy, SecMP | Neighbor-agnostic aggregation, encoding | Versatile cross-silo/device, complexity-analyzed |
| FedSpray (2408.09393) | Structure proxy alignment | Lightweight, no raw data exchange | Unbiased minority node training |
| FedDense (2408.11662) | Dual-densely connected GNNs | Selective parameter sharing | Efficient FLOPs, multi-domain |

Federated Graph Learning advances collaborative modeling in real-world scenarios where data are inherently distributed and privacy-sensitive, offering robust, scalable, and privacy-aware solutions for graph neural network training across domains.