Sum-Product Networks: A New Deep Architecture (1202.3732v1)

Published 14 Feb 2012 in cs.LG, cs.AI, and stat.ML

Abstract: The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs). SPNs are directed acyclic graphs with variables as leaves, sums and products as internal nodes, and weighted edges. We show that if an SPN is complete and consistent it represents the partition function and all marginals of some graphical model, and give semantics to its nodes. Essentially all tractable graphical models can be cast as SPNs, but SPNs are also strictly more general. We then propose learning algorithms for SPNs, based on backpropagation and EM. Experiments show that inference and learning with SPNs can be both faster and more accurate than with standard deep networks. For example, SPNs perform image completion better than state-of-the-art deep networks for this task. SPNs also have intriguing potential connections to the architecture of the cortex.

Citations (741)

Summary

  • The paper introduces SPNs as a novel deep probabilistic framework that achieves tractable, exact inference by representing complex distributions using sum and product nodes.
  • The paper demonstrates an efficient two-pass algorithm for computing marginal probabilities and MPE, significantly reducing computational complexity compared to traditional graphical models.
  • The paper validates SPNs through image completion experiments, showing that integrating hard EM and pruning techniques leads to faster training and improved accuracy.

This paper introduces Sum-Product Networks (SPNs), a novel deep probabilistic architecture designed to overcome the limitations of traditional graphical models, particularly the computational complexity of inference and learning associated with the partition function. SPNs offer a way to represent complex probability distributions where inference remains tractable.

Core Concepts of SPNs:

  • Structure: An SPN is a rooted directed acyclic graph (DAG). The leaves of the graph represent indicator variables for the states of the random variables (e.g., $x_i$ for $X_i=1$ and $\bar{x}_i$ for $X_i=0$ in the Boolean case). Internal nodes are either weighted sum nodes or product nodes.
  • Evaluation: The value of a product node is the product of the values of its children. The value of a sum node $i$ is the weighted sum $\sum_{j \in Ch(i)} w_{ij} v_j$, where $Ch(i)$ is the set of children of $i$, $v_j$ is the value of child $j$, and $w_{ij} \ge 0$ is the weight of the edge from $i$ to $j$. The value of the SPN is the value computed at its root node.
  • Network Polynomial: SPNs compute a polynomial function of the indicator variables. This polynomial, when evaluated under specific indicator settings, can yield probabilities or marginals.
  • Validity: An SPN is valid if evaluating it with evidence $e$ (setting the corresponding indicators to 1 and all others to 0) directly yields the unnormalized probability $\Phi_S(e) = \sum_{x \sim e} S(x)$. The paper provides sufficient conditions for validity:
    • Completeness: All children of the same sum node must have the same scope (i.e., involve the same set of variables).
    • Consistency: A variable cannot appear negated as an indicator leaf input to one child of a product node and non-negated as input to another child of the same product node.
  • Tractability: If an SPN is valid, the partition function $Z_S$ is simply the value of the SPN when all indicators are set to 1, written $S(*)$. Computing $Z_S$ or any marginal probability $P(e) = S(e)/S(*)$ takes time linear in the size (number of edges) of the SPN; a minimal evaluation sketch follows this list. Theorem 2 states that if a distribution is representable by a polynomial-sized valid SPN, its partition function is tractable.
  • Decomposability: A stricter condition where the children of a product node have disjoint scopes. SPNs only require consistency, making them more general than models requiring decomposability (like arithmetic circuits, PCFGs, thin junction trees).
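
To make the evaluation and tractability bullets concrete, here is a minimal sketch of a bottom-up SPN evaluator. The class names and the indicator encoding are assumptions for illustration, not the paper's code; the key point is that setting all indicators to 1 yields $S(*) = Z_S$ for a valid SPN, and any unnormalized evidence value $S(e)$ comes from the same linear-time pass.

```python
# Minimal sketch of bottom-up SPN evaluation over Boolean variables.
# The indicator encoding is an assumption: indicators[var] = (value of x_var, value of x̄_var).

class Leaf:
    """Indicator leaf for one Boolean variable (possibly negated)."""
    def __init__(self, var, negated=False):
        self.var, self.negated = var, negated

    def value(self, indicators):
        pos, neg = indicators[self.var]
        return neg if self.negated else pos

class ProductNode:
    """Product node: multiplies the values of its children."""
    def __init__(self, children):
        self.children = children

    def value(self, indicators):
        v = 1.0
        for child in self.children:
            v *= child.value(indicators)
        return v

class SumNode:
    """Sum node: weighted sum of its children (weights >= 0)."""
    def __init__(self, children, weights):
        self.children, self.weights = children, weights

    def value(self, indicators):
        return sum(w * c.value(indicators)
                   for w, c in zip(self.weights, self.children))

# Tiny example: S(x) = 0.7*x0 + 0.3*x̄0 (a mixture over one variable).
root = SumNode([Leaf(0), Leaf(0, negated=True)], [0.7, 0.3])
Z = root.value({0: (1.0, 1.0)})           # all indicators on -> partition function S(*)
p_x0 = root.value({0: (1.0, 0.0)}) / Z    # P(X0 = 1) = 0.7
```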

SPNs vs. Other Models:

  • Graphical Models: SPNs can represent some distributions (like uniform over even parity states) more compactly than traditional graphical models or mixture models. They naturally capture context-specific independence.
  • Deep Architectures (DBNs, DBMs): Unlike DBNs and DBMs, which typically rely on approximate inference (e.g., Gibbs sampling), valid SPNs allow exact and efficient inference. SPNs explicitly model sums (mixtures) and products (features), whereas DBNs/DBMs often focus on feature hierarchies and approximate the sums.
  • Convolutional Networks: SPNs can be seen as a probabilistic generalization, with sum operations analogous to average-pooling and max operations (for MPE) analogous to max-pooling.
  • Arithmetic Circuits/AND-OR Graphs: SPNs add model semantics and learning algorithms to these related inference compilation structures.

Inference in SPNs:

  • Marginal Probabilities: Can be computed efficiently using a two-pass algorithm (similar to backpropagation). An upward pass computes the value of each node, $S_i(e)$; a downward pass computes the derivatives $\partial S(e) / \partial S_i(e)$. Marginals for indicator variables, $P(X_i=t|e)$, and for latent mixture variables, $P(Y_k=j|e)$, can be derived from these values; a sketch of both passes follows this list. Time complexity is linear in SPN size.
  • Most Probable Explanation (MPE): Can be computed by replacing sum operations with max operations in the upward pass and tracing back the maximizing choices in the downward pass. This is exact for decomposable SPNs and extends to consistent SPNs.
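
As a rough illustration of the two-pass inference described above, the sketch below computes the node values $S_i(e)$ bottom-up and the derivatives $\partial S(e)/\partial S_i(e)$ top-down. The SPN is stored as a topologically ordered list of tuples; this encoding is a hypothetical choice, not the paper's implementation.

```python
# Node formats (hypothetical): ('leaf', var, negated),
# ('prod', [child_ids]), ('sum', [child_ids], [weights]).
# Nodes are topologically ordered with children before parents; the root is last.

def upward(nodes, indicators):
    """Upward pass: compute S_i(e) for every node i."""
    val = [0.0] * len(nodes)
    for i, node in enumerate(nodes):
        if node[0] == 'leaf':
            _, var, negated = node
            pos, neg = indicators[var]
            val[i] = neg if negated else pos
        elif node[0] == 'prod':
            v = 1.0
            for c in node[1]:
                v *= val[c]
            val[i] = v
        else:  # 'sum'
            val[i] = sum(w * val[c] for c, w in zip(node[1], node[2]))
    return val

def downward(nodes, val):
    """Downward pass: compute dS(e)/dS_i(e) for every node i."""
    d = [0.0] * len(nodes)
    d[-1] = 1.0  # derivative of the root with respect to itself
    for i in reversed(range(len(nodes))):
        node = nodes[i]
        if node[0] == 'sum':
            for c, w in zip(node[1], node[2]):
                d[c] += w * d[i]
        elif node[0] == 'prod':
            for c in node[1]:
                other = 1.0
                for c2 in node[1]:
                    if c2 != c:
                        other *= val[c2]
                d[c] += d[i] * other
    return d

# Tiny example: S(x) = 0.7*x0 + 0.3*x̄0 with X0 unobserved (both indicators set to 1).
nodes = [('leaf', 0, False), ('leaf', 0, True), ('sum', [0, 1], [0.7, 0.3])]
val = upward(nodes, {0: (1.0, 1.0)})
d = downward(nodes, val)
p_x0 = d[0] / val[-1]   # P(X0 = 1 | {}) = 0.7
```

Replacing the sum in `upward` with a weighted max and backtracking the argmax at each sum node gives the MPE procedure, which the hard-EM sketch below reuses.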

Learning SPNs:

The paper proposes learning both structure and parameters, often starting with a dense, valid initial structure and then refining it.

  1. Structure Initialization (GenerateDenseSPN): Create an initial valid SPN. One strategy is to define nodes corresponding to subsets of variables (e.g., rectangular regions in an image) and create sum/product nodes based on ways to partition these subsets. Random selection of subsets/partitions is also possible.
  2. Weight Learning (UpdateWeights):
    • Gradient Descent: Use the efficient derivative computation (from marginal inference) to perform gradient ascent on log-likelihood. Requires projection/renormalization to keep weights valid (summing to 1 for children of sum nodes if desired). Prone to vanishing gradients in deep networks.
    • EM (Soft EM): Treat sum nodes as latent variables. The E-step computes posterior probabilities of the latent variables (the marginals $P(Y_k=j|e)$) using the inference algorithm; the M-step updates weights based on expected counts. Also suffers from diffusion issues.
    • Hard EM: Proposed as a solution to vanishing gradients/diffusion. Uses MPE inference instead of marginal inference in the E-step to find the single most likely configuration of the latent variables, updating counts only for the "winning" child of each sum node; the M-step normalizes counts to obtain weights (a sketch of one such update follows this list). This allows learning much deeper SPNs effectively.
  3. Pruning: After learning, edges with zero weight are pruned, simplifying the network.
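
Below is a minimal sketch of one online hard-EM update under the same hypothetical node encoding as the inference sketch above: the E-step runs an MPE (max) upward pass and backtracks to the winning child of each visited sum node, and the M-step renormalizes the accumulated counts into weights. Mini-batching, smoothing schedules, and the sparsity prior are simplified.

```python
from collections import defaultdict

def mpe_upward(nodes, indicators):
    """Upward pass with each sum replaced by a weighted max (MPE inference)."""
    val = [0.0] * len(nodes)
    for i, node in enumerate(nodes):
        if node[0] == 'leaf':
            _, var, negated = node
            pos, neg = indicators[var]
            val[i] = neg if negated else pos
        elif node[0] == 'prod':
            v = 1.0
            for c in node[1]:
                v *= val[c]
            val[i] = v
        else:  # 'sum': take a weighted max instead of a sum
            val[i] = max(w * val[c] for c, w in zip(node[1], node[2]))
    return val

def hard_e_step(nodes, val, counts):
    """Backtrack from the root and credit only the winning child of each
    sum node reached along the MPE configuration."""
    stack = [len(nodes) - 1]
    while stack:
        i = stack.pop()
        node = nodes[i]
        if node[0] == 'prod':
            stack.extend(node[1])
        elif node[0] == 'sum':
            children, weights = node[1], node[2]
            best = max(range(len(children)),
                       key=lambda k: weights[k] * val[children[k]])
            counts[(i, best)] += 1
            stack.append(children[best])

def m_step(nodes, counts, smoothing=1.0):
    """Renormalize per-sum-node counts into weights (add-one style smoothing)."""
    for i, node in enumerate(nodes):
        if node[0] == 'sum':
            c = [counts[(i, k)] + smoothing for k in range(len(node[1]))]
            total = sum(c)
            node[2][:] = [x / total for x in c]

# Usage outline (the iterable of per-example indicator settings is hypothetical):
#   counts = defaultdict(int)
#   for indicators in training_batch:
#       val = mpe_upward(nodes, indicators)
#       hard_e_step(nodes, val, counts)
#   m_step(nodes, counts)
```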

Implementation Considerations & Experiments:

  • Architecture Example (Images): Use nodes for rectangular regions, decompose regions into subregions (potentially multi-resolution).
  • Continuous Variables: Handled by replacing leaf indicators with outputs of probability density functions (e.g., univariate Gaussians or mixtures). Sum nodes become integrals (though in practice, for evidence $X=x$, the node value is $p(x)$; otherwise it is 1); see the leaf sketch after this list. The experiments used a mixture of Gaussians per pixel.
  • Learning Details: Used online hard EM with mini-batches, add-one smoothing, and an L0 prior (possible with hard EM) for sparsity.
  • Task: Image completion on Caltech-101 and Olivetti faces (occluding half the image).
  • Comparison: Compared against DBNs, DBMs, PCA, and Nearest Neighbor.
  • Results: SPNs significantly outperformed the alternatives in Mean Squared Error (MSE) on the completion task. They were also substantially faster to train (hours vs. days or weeks for DBNs/DBMs) and at inference (sub-second exact inference vs. slow approximate inference), and required less hyperparameter tuning and data preprocessing. The learned SPNs were very deep (36 layers). SPNs also showed strong performance on preliminary classification tasks.
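
To illustrate the continuous-variable point from the list above: a leaf simply evaluates a density instead of an indicator, returning $p(x)$ when its variable is observed and 1 when it is marginalized out. The class below is an illustrative assumption; a per-pixel Gaussian mixture, as used in the experiments, would just be a sum node over several such leaves.

```python
import math

class GaussianLeaf:
    """Univariate Gaussian leaf for a continuous variable (illustrative sketch)."""
    def __init__(self, var, mean, std):
        self.var, self.mean, self.std = var, mean, std

    def value(self, evidence):
        if self.var not in evidence:   # unobserved: the density integrates to 1
            return 1.0
        x = evidence[self.var]
        z = (x - self.mean) / self.std
        return math.exp(-0.5 * z * z) / (self.std * math.sqrt(2.0 * math.pi))
```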

Conclusion:

SPNs provide a powerful and tractable deep probabilistic modeling framework. Their key advantage lies in enabling efficient exact inference, which in turn facilitates more effective and faster learning compared to contemporary deep generative models like DBNs and DBMs, especially for deep structures using the proposed hard EM algorithm. The experiments demonstrated significant practical advantages in terms of speed, accuracy, and ease of use for tasks like image completion.