
Influence Propagation Convolution

Updated 21 January 2026
  • Influence Propagation Convolution is a graph operation that integrates diffusion filtering and attention-based message passing to capture influence dynamics.
  • It employs a geometric decay mechanism via the parameter α to control multi-hop feature aggregation and mitigate over-smoothing in deep graph networks.
  • The framework connects Gaussian MRF optimality with practical implementations in social influence prediction and scalable semi-supervised learning.

Influence Propagation Convolution (IPC) is a graph operation arising in both deep learning and statistical modeling contexts, designed to capture influence dynamics by modulating feature aggregation across network structures. IPC encompasses a family of operators motivated by propagation mechanisms, including attention-based message passing and diffusion-style filtering, that integrate local graph topology and heterogeneous node attributes. Recent formulations connect IPC directly to optimal conditional expectations under Gaussian Markov random field (MRF) models, establishing both its theoretical optimality and its practical utility as a principled smoothing operator in graph-based semi-supervised learning.

1. Fundamental Definitions and Mathematical Formulation

IPC operates on graphs G = (V, E) with node features x_u ∈ ℝ^f for u ∈ V. In the generative model of "A Unifying Generative Model for Graph Learning Algorithms" (Jia et al., 2021), both node features and labels are jointly modeled as Gaussian MRFs with precision matrix

\Omega = H \otimes I_n + \text{diag}(h) \otimes L,

where L = I_n − D^{-1/2} A D^{-1/2} is the normalized Laplacian, H governs on-node correlations, and h encodes homophily weights. The conditional expectation of labels given features, E[y | F], yields the IPC operator

T = (I + \omega L)^{-1} = (1-\alpha)\sum_{k=0}^{\infty} \alpha^k S^k,

with S = D^{-1/2} A D^{-1/2} and α = ω/(1+ω). IPC therefore applies a geometric decay of influence over k-hop paths.
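As a quick numerical check, the closed form and its geometric series can be compared on a toy path graph (a NumPy sketch; ω = 1.5 is an arbitrary illustrative value):

```python
import numpy as np

# Toy 4-node path graph; S = D^{-1/2} A D^{-1/2}, L = I - S as in the text.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
L = np.eye(4) - S

omega = 1.5                      # illustrative smoothing strength
alpha = omega / (1.0 + omega)    # alpha = omega / (1 + omega)

T_closed = np.linalg.inv(np.eye(4) + omega * L)

# Truncated geometric series (1 - alpha) * sum_k alpha^k S^k
T_series = np.zeros((4, 4))
term = np.eye(4)
for _ in range(200):
    T_series += term
    term = alpha * (S @ term)
T_series *= (1.0 - alpha)

print(np.allclose(T_closed, T_series, atol=1e-8))  # → True
```

The identity holds because I + ωL = (1+ω)(I − αS), so the inverse expands as a convergent Neumann series whenever α < 1.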

Empirically, IPC is realized through multi-layer graph convolutions with attention mechanisms ("DeepInf: Social Influence Prediction with Deep Learning" (Qiu et al., 2018)), or via flow-based aggregation as in Flow Graph Networks ("Tracing the Propagation Path: A Flow Perspective" (Wang et al., 2019)).

2. Influence Dynamics and Propagation Mechanisms

IPC explicitly models social or information influence with three principal mechanisms:

  • Attention-based weighting (α_{u→v}) allows differential integration of signals from influential neighbors, reflecting activation or connectivity patterns (Qiu et al., 2018).
  • Multi-hop propagation stacks L graph convolutional layers, so that each node's representation encodes up to L-hop neighbor influence with geometric decay (Jia et al., 2021).
  • Gating mechanisms (e.g., GRU-style gates) let individual nodes regulate the acceptance of new messages versus the retention of previous states, capturing inertia and resistance to influence (Qiu et al., 2018).

The diffusion perspective afforded by IPC parameterizes the rate of smoothing (via α or ω), allowing continuous control from no smoothing (α ≈ 0) to full averaging (α → 1) (Jia et al., 2021).
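This continuous control can be seen directly on a toy graph (a NumPy sketch with an arbitrary node signal):

```python
import numpy as np

# 4-node path graph; L = I - S with S the normalized adjacency.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))

x = np.array([1.0, -2.0, 3.0, 0.5])   # arbitrary node signal

results = {}
for alpha in (0.01, 0.5, 0.99):
    omega = alpha / (1.0 - alpha)     # invert alpha = omega / (1 + omega)
    results[alpha] = np.linalg.solve(np.eye(4) + omega * L, x)
    print(alpha, np.round(results[alpha], 3))
# alpha = 0.01 leaves x nearly untouched; alpha = 0.99 pulls the output
# towards the smooth, low-frequency component of the graph signal.
```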

3. Algorithmic Implementations and Workflow

IPC algorithms are instantiated in the following forms:

  • Attention-based Influence-Propagation Convolution (DeepInf): Each layer computes message attention logits via LeakyReLU applied to concatenated projected neighbor states,

e_{u\to v}^{(\ell)} = \text{LeakyReLU}\left(a^{(\ell)\top}\left[W^{(\ell)} h_u^{(\ell)} \,\|\, W^{(\ell)} h_v^{(\ell)}\right]\right)

followed by softmax normalization, aggregation, non-linearity, and optional gating (Qiu et al., 2018).

  • Diffusion-based IPC (Gaussian MRF): Labels or features are propagated using the operator T = (I + ωL)^{-1} or its power-series approximation. Pseudocode (Jia et al., 2021):

F^(0) = F
for t = 0, 1, ... until convergence:
    F^(t+1) = α S F^(t) + (1 − α) F
final output: F' = (I + ω L)^(-1) F

  • Flow-based Neighborhood Aggregation (FlowGN): Random walks or explicit path sampling carry source features across up to l-hop paths, accumulating flows and aggregating them for node updates.

h_v^{k} = \sigma\left(W^k \left[h_v^{k-1} \,\|\, \mathrm{AGGREGATE}\{f \in \mathcal{C}(v)\}\right]\right)

(Wang et al., 2019).
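The attention-based variant above can be sketched as a minimal single-head layer in NumPy (the dimensions, random weights, and ELU non-linearity are illustrative assumptions, not DeepInf's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_in, f_out = 4, 3, 2
H = rng.normal(size=(n, f_in))            # node states h_u
W = rng.normal(size=(f_in, f_out))        # shared projection W
a = rng.normal(size=2 * f_out)            # attention vector a
A = np.array([[1, 1, 0, 0],               # path graph with self-loops
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

Wh = H @ W
E = np.full((n, n), -np.inf)              # -inf masks non-neighbors in softmax
for v in range(n):                        # e_{u->v} = LeakyReLU(a^T [Wh_u || Wh_v])
    for u in range(n):
        if A[v, u]:
            E[v, u] = leaky_relu(a @ np.concatenate([Wh[u], Wh[v]]))

att = np.exp(E - E.max(axis=1, keepdims=True))
att /= att.sum(axis=1, keepdims=True)     # softmax over each neighborhood
z = att @ Wh                              # attention-weighted aggregation
H_next = np.where(z > 0, z, np.exp(z) - 1)  # ELU non-linearity
print(H_next.shape)                       # (4, 2)
```

Each node's attention weights sum to one over its neighborhood, so the aggregation is a learned convex combination of projected neighbor states.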

4. Comparison with Fixed-Depth Graph Convolutions

IPC differs fundamentally from fixed-depth K-layer Graph Convolutional Networks (GCNs) and Simple Graph Convolutions (SGC):

Operator | Propagation Mechanism | Smoothing Control
IPC | Geometric decay (α^k) | Continuous (α)
SGC | Fixed-depth averaging (S^K) | Discrete (K)
LP | Solves under label constraints | Analogous optimizer

IPC's geometric decay modulates the contribution of distant neighbors, mitigating the over-smoothing that afflicts deep GCNs, which can force all outputs towards a constant subspace as K → ∞ (Jia et al., 2021). Flow-based propagators (FlowGN) further decouple the propagation horizon from network depth, allowing deeper feature transformations without excessive neighborhood mixing, with improved empirical accuracy and scalability (Wang et al., 2019).
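The contrast can be illustrated numerically (a NumPy sketch using GCN-style self-loop renormalization, an assumption made here so that repeated propagation converges): a deep fixed propagation collapses distinct inputs onto one direction, while the IPC operator keeps them distinguishable.

```python
import numpy as np

# 4-node path graph with self-loops added (GCN-style renormalization).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float) + np.eye(4)
d = A.sum(axis=1)
S_hat = A / np.sqrt(np.outer(d, d))
L = np.eye(4) - S_hat

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

x1 = np.array([1.0, -2.0, 3.0, 0.5])
x2 = np.array([0.0, 4.0, -1.0, 2.0])

deep = np.linalg.matrix_power(S_hat, 50)    # a "50-layer" fixed propagation
T = np.linalg.inv(np.eye(4) + 1.5 * L)      # IPC with omega = 1.5

cos_deep = cos(deep @ x1, deep @ x2)        # ~1: outputs collapse together
cos_ipc = cos(T @ x1, T @ x2)               # inputs remain distinguishable
print(round(cos_deep, 4), round(cos_ipc, 4))
```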

5. Computational Complexity and Scalability

The principal scaling factors for IPC implementations are:

  • Diffusion-based IPC is computed either via iterative sparse multiplication (O(K|E|p) for power-series depth K) or via direct sparse linear solves (e.g., conjugate gradient, O(T|E|p) for T iterations) (Jia et al., 2021).
  • Flow-based IPC (FlowGN) samples M ≈ r|V| paths of length l. The total computation per layer is O(Mld + |V|d), which remains scalable for small r and l (Wang et al., 2019).
  • Attention-based IPC (DeepInf) involves attention-logit computation per edge, aggregation, and non-linear transformation on sampled subgraphs of size K (Qiu et al., 2018).

Empirical benchmarks indicate that flow- and diffusion-based IPCs scale favorably relative to recursive neighborhood-expansions in classical GCNs.
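A sketch of the sparse conjugate-gradient route (using SciPy on a randomly generated graph; the size, density, and ω are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Diffusion-based IPC at scale: solve (I + omega*L) f' = f column by column
# with conjugate gradient; total cost is O(iterations * |E| * p).
n, p, omega = 1000, 4, 1.5
rng = np.random.default_rng(0)

# Random sparse symmetric adjacency with self-loops (illustrative graph)
A = sp.random(n, n, density=0.01, random_state=0)
A = ((A + A.T + sp.identity(n)).tocsr() > 0).astype(float)
d_inv_sqrt = 1.0 / np.sqrt(np.asarray(A.sum(axis=1)).ravel())
S = sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)
M = sp.identity(n) + omega * (sp.identity(n) - S)   # I + omega * L

F = rng.normal(size=(n, p))
F_out = np.column_stack([cg(M, F[:, j])[0] for j in range(p)])
print(np.allclose(M @ F_out, F, atol=1e-3))         # → True
```

Because M = I + ωL is symmetric positive definite with eigenvalues in [1, 1 + 2ω], conjugate gradient converges in few iterations without preconditioning.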

6. Theoretical Optimality, Expressivity, and Identifiability

IPC is rigorously characterized as the L_2-optimal predictor under the conditional expectation E[y | F] for Gaussian MRFs (Jia et al., 2021). The operator T acts as a low-pass filter in the eigenbasis of S, preserving the zero-frequency component and attenuating high-frequency noise.
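The low-pass behavior follows because T shares eigenvectors with L = I − S and scales the component at Laplacian frequency μ by 1/(1 + ωμ), so μ = 0 passes unchanged and higher frequencies shrink; a small NumPy check:

```python
import numpy as np

# 4-node path graph; L = I - S with S the normalized adjacency.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))

omega = 1.5
mu = np.linalg.eigvalsh(L)                  # graph frequencies, ascending
gain = 1.0 / (1.0 + omega * mu)             # T's eigenvalues (filter gains)
print(np.round(mu, 3), np.round(gain, 3))   # gain decreases as mu increases
```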

IPC offers richer expressivity than SGC, as its smoothing is indexed by a continuous parameter α ∈ (0, 1) rather than a discrete depth K. Tuning α by cross-validation consistently recovers the true homophily strengths of the underlying data-generating process. Error bounds are computable via the covariance traces of the conditional variance (Jia et al., 2021).
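A sketch of tuning α by validation error (the helper names, grid, and squared-error score are illustrative assumptions, not the exact procedure of Jia et al.):

```python
import numpy as np

def ipc_smooth(L, F, alpha):
    """Apply T = (I + omega*L)^{-1} F with omega = alpha / (1 - alpha)."""
    omega = alpha / (1.0 - alpha)
    return np.linalg.solve(np.eye(L.shape[0]) + omega * L, F)

def tune_alpha(L, F, y, val_mask, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the alpha whose smoothed features best fit held-out targets."""
    scores = [np.mean((ipc_smooth(L, F, a)[val_mask] - y[val_mask]) ** 2)
              for a in grid]
    return grid[int(np.argmin(scores))]

# Tiny usage: noisy observations of a smooth signal on a 4-node path graph
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
d = A.sum(axis=1)
L_path = np.eye(4) - A / np.sqrt(np.outer(d, d))
y = np.array([1.0, 1.1, 0.9, 1.0])
F = y + np.random.default_rng(0).normal(scale=0.3, size=4)
best = tune_alpha(L_path, F, y, val_mask=np.array([True, False, True, False]))
print(best)   # a value from the (0.05, 0.95) grid
```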

7. Applications and Hyperparameter Choices

IPC has been deployed in social influence prediction tasks, where the goal is to estimate the likelihood a user will be activated based on neighbor states and attributes (Qiu et al., 2018). Hyperparameters influencing IPC performance and interpretability include:

  • Propagation depth L (or path length l for FlowGN)
  • Smoothing parameter α (diffusion strength)
  • Neighborhood sample size K (DeepInf)
  • Hidden dimension d of the embeddings
  • Dropout rates and number of attention heads (DeepInf)
  • Nonlinearity choice (ReLU, ELU, etc.)
  • Gating inclusion (GRU-style versus simple activation)

IPC-based models have demonstrated state-of-the-art accuracy and runtime efficiency on node classification benchmarks (Cora, Citeseer, Pubmed, Coauthor) (Wang et al., 2019).


The Influence Propagation Convolution framework unifies feature and label propagation, diffusion filtering, and attention-based message passing within a single rigorous paradigm, yielding scalable and expressive algorithms for graph-based representation learning and social influence prediction.
