
Contextual Graph Markov Model (CGMM)

Updated 21 January 2026
  • Contextual Graph Markov Model (CGMM) is a deep generative model that builds layered representations by diffusing frozen neighbor states to capture l-hop contexts.
  • It employs a layer-wise probabilistic approach with Expectation-Maximization training, ensuring cycle-free inference and scalable computation on cyclic, directed, or undirected graphs.
  • The model aggregates hidden-state assignments into fixed-size graph fingerprints for downstream tasks like classification, demonstrating strong performance on benchmarks such as INEX2005, MUTAG, CPDB, and AIDS.

A Contextual Graph Markov Model (CGMM) is a deep, generative probabilistic model designed for the processing and analysis of general graphs, including structures that are cyclic, directed, or undirected. CGMMs construct a hierarchical stack of local graphical models, each layer encoding vertex and edge structure while diffusing neighborhood context efficiently through the graph. The resulting structure supports scalable learning and yields discriminative feature representations for downstream tasks such as graph classification. Two closely related lines of development are highlighted in recent literature: first, the deep generative CGMM framework itself (Bacciu et al., 2018), and second, Contextual Markov Networks (CMNs) and their structure-learning via marginal pseudo-likelihood (Pensar et al., 2021).

1. Layer-wise Probabilistic Architecture

CGMM builds a stack of $L$ layers, where each layer learns to encode vertex labels conditioned on the hidden states of neighboring vertices inferred in previous layers. The process is constructive:

  • Layer 1 assigns a hidden state $Q_u^{(1)} \in \{1,\dots,C\}$ to each vertex $u$, from which the observed label $y_u$ is emitted, independently of graph structure.
  • Layer $l>1$ uses the inferred states $\{ q_v^{(l')} : v \in Ne(u),\ l' < l \}$ of neighbors from previous layers as "frozen" context, conditioning each new hidden state $Q_u^{(l)}$ on this context before generating $y_u$.

By stacking layers, the receptive field of each node grows to cover progressively larger $l$-hop neighborhoods. The key property is that within a single layer, dependencies are only on frozen states from lower layers, resulting in cycle-free (i.e., no inference loops) computation regardless of graph topology (Bacciu et al., 2018).
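The layer-wise freezing scheme above can be sketched in a few lines. This is a minimal illustration, not the trained model: the graph, labels, and the (random, untrained) emission and transition tables are hypothetical stand-ins, and hard assignments are taken by argmax for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph: adjacency list for 5 vertices (hypothetical example).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = np.array([0, 1, 0, 1, 0])   # observed vertex labels y_u
C, M, L = 3, 2, 3                    # hidden states, label alphabet, layers

# Illustrative (untrained) parameters: emission P(y | Q) and a per-neighbor
# transition table P(Q | q_neighbor); a real CGMM would learn these via EM.
emission = rng.dirichlet(np.ones(M), size=C)     # shape (C, M)
transition = rng.dirichlet(np.ones(C), size=C)   # shape (C_prev, C)
prior = np.full(C, 1.0 / C)

frozen = {}  # layer -> hard state assignments, never revisited once set
for layer in range(1, L + 1):
    states = np.empty(len(neighbors), dtype=int)
    for u, nbrs in neighbors.items():
        if layer == 1:
            # Layer 1: condition only on the observed label, not on structure.
            posterior = prior * emission[:, labels[u]]
        else:
            # Layer l > 1: average transition rows indexed by the FROZEN
            # neighbor states of the previous layer. No within-layer
            # dependencies exist, so inference is cycle-free on any topology.
            context = transition[frozen[layer - 1][nbrs]].mean(axis=0)
            posterior = context * emission[:, labels[u]]
        states[u] = int(np.argmax(posterior))
    frozen[layer] = states  # freeze before the next layer starts

print({l: s.tolist() for l, s in frozen.items()})
```

Note how `frozen[layer - 1]` is read but never written again: each layer sees only lower-layer statistics, which is exactly what removes inference loops.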

2. Mathematical Formulation

Formally, the likelihood at layer $l$ for a training set of graphs $\mathcal{G}$ is:

$$\mathcal{L}^{(l)}(\theta) = \prod_{g \in \mathcal{G}} \prod_{u \in V_g} \sum_{i=1}^{C} P(y_u \mid Q_u = i)\; P\bigl(Q_u = i \mid \hat{\mathbf{q}}^{L^{-1}(l)}_{Ne(u)}\bigr)$$

where $\hat{\mathbf{q}}^{L^{-1}(l)}_{Ne(u)}$ encapsulates frozen neighbor states from previous layers. Direct conditioning on entire neighbor configurations is combinatorial; CGMM resolves this with two latent "switch" variables: $L_u$ selects the context-providing previous layer, and $S_u$ selects a relevant edge label class, resulting in the recursive decomposition:

$$P\bigl(Q_u = i \mid \hat{\mathbf{q}}^{L^{-1}(l)}_{Ne(u)}\bigr) = \sum_{l'<l} P(L_u = l') \sum_{a=1}^{A} P^{(l')}(S_u = a)\, P^{(l',a)}\bigl(Q_u = i \mid \hat{\mathbf{q}}_{Ne^{l',a}(u)}\bigr)$$

with further averaging over neighbors in the selected layer and edge class. The context-diffusion principle is implicit: new hidden states are assigned to maximize posterior probability conditioned on observed labels and frozen neighbor context.
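The switch-variable decomposition can be checked numerically. The sketch below, with illustrative sizes and random (hypothetical) distributions for $P(L_u)$, $P^{(l')}(S_u)$, and $P^{(l',a)}(Q_u \mid q_v)$, sums over the switch variables and averages over the neighbors in each selected layer/edge class; the result is a valid distribution over the $C$ hidden states.

```python
import numpy as np

rng = np.random.default_rng(1)
C, A, prev_layers = 4, 2, 3  # hidden states, edge classes, layers below l (illustrative)

# Hypothetical learned distributions (random here, for illustration only).
P_L = rng.dirichlet(np.ones(prev_layers))                  # P(L_u = l')
P_S = rng.dirichlet(np.ones(A), size=prev_layers)          # P^{(l')}(S_u = a)
P_Q = rng.dirichlet(np.ones(C), size=(prev_layers, A, C))  # P^{(l',a)}(Q_u = i | q_v = j)

# Frozen neighbor states per (previous layer, edge class): 3 neighbors each.
frozen_ctx = {(lp, a): rng.integers(0, C, size=3)
              for lp in range(prev_layers) for a in range(A)}

# The decomposition: sum over switch variables, averaging the conditional
# rows selected by the frozen neighbor states in each (layer, edge class).
p = np.zeros(C)
for lp in range(prev_layers):
    for a in range(A):
        neigh = frozen_ctx[(lp, a)]
        p += P_L[lp] * P_S[lp, a] * P_Q[lp, a, neigh].mean(axis=0)

print(p, p.sum())  # a proper distribution: nonnegative, sums to 1
```

Because each inner average of conditional rows sums to one, the mixture weights $P(L_u = l')\,P^{(l')}(S_u = a)$ guarantee normalization without ever enumerating joint neighbor configurations.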

3. Training and Scalability

Each layer is trained independently using the Expectation-Maximization (EM) algorithm. In the E-step, the responsibilities

$$\gamma_{u i l' a j} = P\bigl(Q_u = i,\ L_u = l',\ S_u = a,\ q_v = j \mid y_u,\ \hat{\mathbf{q}}_{Ne(u)}\bigr)$$

are computed for all latent variables, and in the M-step, the parameters of the multinomial distributions are updated in closed form. This layer-wise EM paradigm ensures no loopy inference is required within layers. The per-layer computational complexity for a graph $g$ is

$$\mathcal{O}\bigl(|V_g|\, |L^{-1}(l)|\, A\, C^2\bigr)$$

where $|V_g|$ is the number of vertices, $|L^{-1}(l)|$ the number of lower layers providing context, $A$ the number of edge label classes, and $C$ the number of hidden states. Because $|L^{-1}(l)|$ and $C$ are typically small (tens), this yields high efficiency, with scalability linear in the number of vertices and only a mild dependence on model hyperparameters (Bacciu et al., 2018).
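To make the EM mechanics concrete: layer 1 of CGMM, which ignores graph structure, reduces to a plain categorical mixture, so its E- and M-steps take the familiar closed form sketched below. The data and dimensions are illustrative; deeper layers would add the switch variables to the responsibilities.

```python
import numpy as np

rng = np.random.default_rng(2)
C, M = 3, 4                       # hidden states, label alphabet (illustrative)
y = rng.integers(0, M, size=200)  # observed vertex labels pooled over graphs

# Layer 1 of CGMM is a categorical mixture: EM over prior and emission.
prior = np.full(C, 1.0 / C)
emission = rng.dirichlet(np.ones(M), size=C)   # P(y | Q = i), shape (C, M)

for _ in range(50):
    # E-step: responsibilities gamma[u, i] = P(Q_u = i | y_u).
    joint = prior[None, :] * emission[:, y].T          # shape (N, C)
    gamma = joint / joint.sum(axis=1, keepdims=True)
    # M-step: closed-form multinomial updates from expected counts.
    prior = gamma.mean(axis=0)
    for m in range(M):
        emission[:, m] = gamma[y == m].sum(axis=0)
    emission /= emission.sum(axis=1, keepdims=True)

log_lik = np.log(joint.sum(axis=1)).sum()
print(prior, log_lik)
```

Each iteration costs $\mathcal{O}(N\,C)$ here; the extra $|L^{-1}(l)|\,A\,C$ factor of deeper layers comes only from enlarging the responsibility table, not from any iterative message passing.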

4. Graph Fingerprints and Downstream Use

Upon completion of layer-wise training, the CGMM infers hard hidden-state assignments for each vertex at each layer. These assignments are aggregated to construct a fixed-size graph fingerprint:

$$\mathbf{z}(g) = \bigl[\, \#\{u : q_u^{(1)} = 1\}, \dots, \#\{u : q_u^{(L)} = C\} \,\bigr] \in \mathbb{R}^{L \cdot C}$$

This vector summarization enables the application of standard discriminative learners, such as SVMs with RBF or string kernels, to predict graph labels or perform regression (Bacciu et al., 2018).
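The fingerprint is just a concatenation of per-layer state histograms, as the short sketch below shows for one hypothetical 5-vertex graph with $L = 2$ layers and $C = 3$ states.

```python
import numpy as np

C, L = 3, 2  # hidden states per layer, number of layers (illustrative)

# Hard hidden-state assignments q_u^{(l)} for one 5-vertex graph, per layer.
q = {1: np.array([0, 1, 1, 2, 0]),
     2: np.array([2, 2, 0, 1, 1])}

# Fingerprint: per-layer histogram of state counts, concatenated -> length L*C.
z = np.concatenate([np.bincount(q[l], minlength=C) for l in range(1, L + 1)])
print(z)  # -> [2 2 1 1 2 2]
```

The vector has fixed length $L \cdot C$ regardless of graph size, which is what lets a standard kernel machine consume graphs of varying topology.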

5. Theoretical Properties and Expressiveness

CGMM is stationary in that a single shared parameter set is used across the vertices of all input graphs. Its construction guarantees cycle-free inference, owing to the strict dependence on frozen previous-layer statistics. The depth $L$ of the model mediates its expressivity: an $L$-layer model captures up to $L$-hop contextual dependencies, and in the large-$L$ regime CGMM can distinguish patterns in the same regime as $L$-layer message-passing GNNs, but within a generative modeling formalism. Training and inference both admit linear complexity in dataset size, controlled by small constants determined by $C$, $A$, and $L$ (Bacciu et al., 2018).

6. Empirical Results

Benchmark evaluations demonstrate CGMM's efficacy for both tree and general graph classification tasks:

  • On the INEX2005 tree classification benchmark, a 4-layer CGMM with $C \in \{20, 40\}$ achieved approximately 96.7% test accuracy, compared to 97.0% for the PT-kernel.
  • On the MUTAG, CPDB, and AIDS graph benchmarks, CGMM (depth up to 8, $C \in \{20, 40\}$, with/without pooling) matched or outperformed prior models: 81.0% accuracy on CPDB, 84.2% on AIDS, and 91.2% on MUTAG.
  • Depth ablation studies found that performance improves through layers 3–5, then saturates or slightly drops for very large $L$, indicating saturated contextual propagation and possible overfitting (Bacciu et al., 2018).

7. Contextual Graph Markov Models in Structure Learning

Contextual Markov Networks (CMNs), a related framework, generalize Markov networks by encoding context-specific independences (CSIs), where pairwise dependencies can be switched off in certain neighbor-assignment contexts. A CMN is formally represented as a pair $(G, \mathcal{C})$: a graph together with a set of edge-contexts.

Structure learning in CMNs leverages marginal pseudo-likelihood (MPL) scoring, supporting efficient model selection without the chordality assumption. The CMN MPL is a product over context-merged neighborhoods:

$$\boxed{\ \mathrm{MPL}(G, \mathcal{C}) = \prod_{t=1}^{n} \prod_{j=1}^{d} P\bigl(x_{j,t} \mid x_{mb(j),t},\ \text{context-class}(j,t)\bigr)\ }$$

with closed-form Dirichlet marginalization. A two-level greedy hill-climbing algorithm alternates between updating the graph and optimizing the local contexts (edge assignments), penalizing overly rich context sets via a prior parameter $\kappa$. This framework is proven statistically consistent: maximizing the MPL recovers the true CMN structure asymptotically. Empirical experiments on synthetic and real-world discrete datasets confirm that CMNs achieve the highest likelihood and out-of-sample predictive accuracy, using fewer parameters than stratified graphical models and outperforming ordinary Markov networks (Pensar et al., 2021).
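The building block of the MPL score is a Dirichlet-multinomial marginal likelihood for each variable given its context-merged Markov-blanket configuration. The sketch below implements that standard closed form; the function name, the uniform prior `alpha`, and the random data are illustrative stand-ins, not the paper's exact scoring code (which also includes the $\kappa$ penalty and the hill climb around this score).

```python
import math
import numpy as np

def local_marginal_loglik(x_j, parent_config, r=2, alpha=1.0):
    """Log marginal likelihood of x_j given a merged parent configuration,
    with the local conditional distributions integrated out under a
    symmetric Dirichlet(alpha) prior (Dirichlet-multinomial closed form)."""
    score = 0.0
    for cfg in set(parent_config):
        counts = np.bincount(x_j[parent_config == cfg], minlength=r)
        n = int(counts.sum())
        score += math.lgamma(r * alpha) - math.lgamma(r * alpha + n)
        score += sum(math.lgamma(alpha + int(c)) - math.lgamma(alpha)
                     for c in counts)
    return score

rng = np.random.default_rng(3)
x = rng.integers(0, 2, size=100)        # binary variable x_j over 100 samples
parents = rng.integers(0, 3, size=100)  # merged context-class index per sample
print(local_marginal_loglik(x, parents))
```

Merging parent configurations into context classes shrinks the number of `cfg` cells, so a CSI-aware structure pays for fewer Dirichlet terms, which is how CMNs use fewer parameters than fully stratified models.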
