DAG-Wishart Prior for Gaussian DAG Models
- DAG-Wishart prior is a conjugate prior for covariance matrices of Gaussian DAG models, featuring vertex-specific shape parameters and a flexible Cholesky parameterization.
- It generalizes earlier Wishart constructions by accommodating arbitrary DAGs, thereby enhancing Bayesian inference and model selection in both decomposable and non-decomposable graphs.
- Its strong hyper Markov properties enable closed-form marginal likelihoods and scalable structure learning, making it effective for high-dimensional network data.
The DAG-Wishart prior is a class of conjugate priors for the covariance and precision matrices of Gaussian graphical models Markov with respect to arbitrary directed acyclic graphs (DAGs). It provides a unifying, scalable, and flexible framework for Bayesian inference, model selection, and covariance estimation in Gaussian DAG models. By enabling a multi-parameter, vertex-specific formulation on the Cholesky factorization, the DAG-Wishart prior generalizes and extends earlier graphical Wishart constructions—especially to non-decomposable and high-dimensional settings.
1. Definition and Cholesky Parameterization
The DAG-Wishart prior arises from parameterizing the precision matrix $\Omega$ of a $p$-dimensional Gaussian variable as
$$\Omega = L D^{-1} L^\top,$$
where $L$ is a lower-triangular matrix with unit diagonal ($L_{ii} = 1$) and off-diagonal entries $L_{ij}$ ($i > j$) set to zero whenever $j \notin \mathrm{pa}_G(i)$, with $\mathrm{pa}_G(i)$ denoting the parent set of node $i$ under the DAG $G$. $D$ is a positive diagonal matrix.
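This parameterization can be sketched numerically. The toy DAG, coefficients, and variances below are illustrative choices of ours, not values from the paper:

```python
import numpy as np

# Toy DAG on 3 nodes with edges 0 -> 2 and 1 -> 2, vertices ordered so that
# every parent precedes its child; pa[i] is the parent set of node i.
pa = {0: set(), 1: set(), 2: {0, 1}}
p = 3

# L: unit lower triangular; L[i, j] may be nonzero only when j is a parent of i.
L = np.eye(p)
L[2, 0] = 0.5    # illustrative regression coefficient of node 2 on node 0
L[2, 1] = -0.3   # illustrative regression coefficient of node 2 on node 1
D = np.diag([1.0, 2.0, 0.5])  # illustrative conditional variances

# Precision matrix Markov with respect to the DAG.
Omega = L @ np.linalg.inv(D) @ L.T

assert np.all(np.linalg.eigvalsh(Omega) > 0)      # positive definite
assert all(L[i, j] == 0 for i in range(p) for j in range(i) if j not in pa[i])
```

The zero pattern of $L$ is exactly the conditional-independence structure of the DAG, which is what makes the parameterization convenient for prior specification.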
The DAG-Wishart prior on the "Cholesky space" $\Theta_G$ is given by
$$\pi_{U,\alpha}(D, L) \propto \exp\!\left(-\tfrac{1}{2}\,\mathrm{tr}\big(L D^{-1} L^\top U\big)\right) \prod_{i=1}^{p} D_{ii}^{-\alpha_i/2},$$
with $U$ a positive-definite "scale" matrix and $\alpha = (\alpha_1, \dots, \alpha_p)$ a vector of shape parameters. The normalizing constant has a closed form, provided that for each $i$, $\alpha_i > |\mathrm{pa}_G(i)| + 2$ (Ben-David et al., 2011).
By an explicit change of variables with the appropriate Jacobians, this density induces corresponding priors on the spaces of incomplete precision and incomplete covariance matrices, retaining only the free parameters dictated by the DAG.
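The density above is easy to evaluate up to the normalizing constant. A minimal sketch (the function name is ours), following the form $\pi(D,L) \propto \exp(-\tfrac12 \mathrm{tr}(L D^{-1} L^\top U)) \prod_i D_{ii}^{-\alpha_i/2}$:

```python
import numpy as np

def dag_wishart_logpdf_unnorm(L, D, U, alpha):
    """Unnormalized log-density of the DAG-Wishart prior on (D, L):
    -1/2 tr(L D^{-1} L^T U) - sum_i (alpha_i / 2) * log D_ii."""
    d = np.diag(D)
    Omega = L @ np.diag(1.0 / d) @ L.T
    return -0.5 * np.trace(Omega @ U) - 0.5 * np.sum(alpha * np.log(d))

# Sanity check: with L = D = U = I_p the trace term equals -p/2 and the
# shape term vanishes (log 1 = 0).
p = 3
val = dag_wishart_logpdf_unnorm(np.eye(p), np.eye(p), np.eye(p),
                                np.array([7.0, 7.0, 7.0]))
```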
2. Generalization Beyond Decomposable Graphs
Earlier conjugate priors for Gaussian graphical models, such as the hyper inverse Wishart for undirected decomposable graphs (Dawid & Lauritzen, 1993) and the Letac–Massam multi-parameter Wishart for decomposable structures, either lacked vertex-specific flexibility or were not extensible to non-decomposable graphs. Previous DAG-based Wisharts (e.g., Geiger & Heckerman) often allowed only a single shape parameter.
The DAG-Wishart prior fundamentally extends this landscape:
- It is defined for arbitrary DAGs, not just those corresponding to decomposable graphs.
- Each node $i$ can have its own shape parameter $\alpha_i$.
- The explicit Cholesky formulation allows correct imposition of the conditional independence relations, regardless of decomposability.
- The normalizing constant is available in closed form and does not require computational completion schemes (Ben-David et al., 2011, Ben-David et al., 2014).
This approach leads to a dramatic increase in the flexibility and practical applicability of Bayesian inference on DAG models, opening up a broader class of model structures and shrinkage regimes (Ben-David et al., 2014).
3. Hyper Markov Properties and Conjugacy
A defining feature of the DAG-Wishart prior is its strong hyper Markov property. Under this prior, the parameter blocks $(D_{ii}, L_{i,\mathrm{pa}_G(i)})$ for $i = 1, \dots, p$ are mutually independent:
$$\pi_{U,\alpha}(D, L) = \prod_{i=1}^{p} \pi_i\big(D_{ii}, L_{i,\mathrm{pa}_G(i)}\big).$$
This strong separation means that the prior respects, and enforces, the Markov properties encoded in the DAG—a property critical for efficient computation and tractable marginal likelihoods (Ben-David et al., 2011).
A major implication is posterior conjugacy: given $n$ independent observations from $\mathcal{N}_p\big(0, (L D^{-1} L^\top)^{-1}\big)$, the posterior is again a DAG-Wishart with updated hyperparameters $U + nS$ and $\alpha_i + n$ for each $i$, where $S$ is the empirical covariance.
Explicit conditional independence also ensures that marginal likelihoods, posterior moments, and other key Bayesian integrals remain tractable regardless of problem dimension.
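The conjugate update described above is a one-liner in practice. A sketch (the function name is ours; observations are assumed mean-zero, rows are samples):

```python
import numpy as np

def dag_wishart_update(U, alpha, Y):
    """Conjugate update: scale U -> U + n*S, each shape alpha_i -> alpha_i + n,
    with S the empirical covariance of the mean-zero observations Y."""
    n = Y.shape[0]
    S = (Y.T @ Y) / n
    return U + n * S, alpha + n

rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 4))                      # 50 samples, p = 4
U_post, alpha_post = dag_wishart_update(np.eye(4), np.full(4, 10.0), Y)
```

Because the update only touches $U$ and $\alpha$, repeated batches of data can be absorbed sequentially without reprocessing earlier observations.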
4. Bayesian Model Selection Methodology
The DAG-Wishart prior enables a scalable, closed-form Bayesian model selection procedure over the space of DAG-Markov models with a given vertex ordering. The closed-form marginal likelihood is
$$m(\mathbf{Y} \mid G) = (2\pi)^{-np/2}\,\frac{z_G\big(U + nS,\ (\alpha_i + n)_i\big)}{z_G(U, \alpha)},$$
where $z_G$ denotes the DAG-Wishart normalizing constant; due to conjugacy and the hyper Markov properties, this ratio admits efficient computation.
The paper proposes the DAG-W algorithm, which includes:
- Generation of candidate DAGs via a Lasso–DAG regularization path.
- Exploration of the local neighborhood (graphs that differ by a single edge).
- Stochastic shotgun search using annealed log–posterior scores.
- Selection of the highest-scoring graph.
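The local-neighborhood step above has a particularly simple form when the vertex ordering is fixed, since any edge from an earlier to a later vertex preserves acyclicity. A sketch (the function name and edge encoding are ours):

```python
import itertools

def neighbors(edges, p):
    """All DAGs (under a fixed vertex ordering) differing by a single edge.

    Edges are pairs (j, i) with j < i; with a fixed ordering every such edge
    keeps the graph acyclic, so toggling one edge needs no cycle check.
    """
    return [frozenset(set(edges) ^ {(j, i)})
            for j, i in itertools.combinations(range(p), 2)]

# The empty graph on 3 nodes has 3 single-edge neighbors.
nbrs = neighbors(frozenset(), 3)
```

Each neighbor can then be scored via the closed-form marginal likelihood, which is what makes the shotgun search tractable.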
Empirical evaluation confirms that this strategy scales to high dimensions ($p$ up to $2000$), providing reliable graph estimation and covariance inference (Ben-David et al., 2011).
5. Numerical and Empirical Results
Extensive simulation and real data studies highlight key advantages:
- In high-dimensional, sparse settings (dimension $p$ up to $2000$), DAG-Wishart–based Bayesian model selection outperforms Lasso–DAG in sensitivity without compromising specificity.
- Compared to maximum likelihood estimators, posterior mean and MAP estimators under the DAG-Wishart prior yield reduced error (in terms of standard matrix losses and Stein's risk), with the improvement most pronounced in small-sample regimes.
- Real data (e.g., protein signaling networks, call center data) analysis reveals that the method recovers known structure and discovers additional interpretable edges while lowering prediction error.
- Performance tables demonstrate scalability and adaptability across a spectrum of problem sizes and sparsity levels (Ben-David et al., 2011).
6. Theoretical Justification and Connections
Later work established that, under mild regularity and modularity assumptions, the normal-Wishart (DAG-Wishart) prior is the unique conjugate prior for Gaussian DAG models that satisfies complete model equivalence and global parameter independence (Geiger et al., 2013, Geiger et al., 2021). This is rooted in a block-independence property (Schur complement independence) of the Wishart distribution: for a Wishart-distributed matrix $W$ partitioned into blocks, the Schur complement $W_{11} - W_{12} W_{22}^{-1} W_{21}$ is independent of $(W_{12}, W_{22})$, for every possible block decomposition. This theoretical underpinning ensures parameter and likelihood modularity, consistency of model scoring across Markov equivalent structures, and tractable marginal likelihood factorization (Geiger et al., 2013, Geiger et al., 2021).
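The Schur-complement independence property is easy to illustrate empirically. The simulation setup below (dimensions, seed, sample sizes) is our own choice: under independence, the sample correlation between the Schur complement and the conditioning blocks should be near zero.

```python
import numpy as np

# Draw N Wishart(dim=2, df=n, I) matrices as W = X^T X with X standard normal.
rng = np.random.default_rng(42)
N, n = 20000, 5
X = rng.standard_normal((N, n, 2))
W = np.einsum('rni,rnj->rij', X, X)

# Scalar Schur complement W_{11.2} = W_11 - W_12^2 / W_22, which the
# block-independence property says is independent of (W_12, W_22).
w11_2 = W[:, 0, 0] - W[:, 0, 1] ** 2 / W[:, 1, 1]
corr = np.corrcoef(w11_2, W[:, 1, 1])[0, 1]   # near zero under independence
```

With $20{,}000$ draws the sampling noise of the correlation is roughly $1/\sqrt{N} \approx 0.007$, so an estimate close to zero is a meaningful check.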
These results also justify a methodology in which, given a prior for one complete model, priors for all DAG models can be derived via coordinate transformation, greatly reducing the burden of prior specification in large graphical model spaces.
7. Applications and Implications
The DAG-Wishart prior framework has broad applicability:
- Genomics and systems biology: Flexible, vertex-specific shape parameters enable the modeling of heterogeneous network structures (e.g., gene or protein regulatory networks).
- Finance and signal processing: Efficient high-dimensional covariance estimation with sparsity constraints, essential for portfolio, risk, and filter models.
- Causal inference and time series: Enables explicit Bayesian updating and model selection in temporal or causal network structures, benefitting applications ranging from time-ordered Bayesian networks to dynamic graphical models.
- Algorithmic and computational advances: The strong hyper Markov and conjugacy properties facilitate Markov chain sampling schemes, scalable structure search algorithms, and explicit treatment of both decomposable and non-decomposable graphs.
Potential future developments include integration with non-Gaussian or mixed variable models, incorporation into scalable MCMC and variational frameworks, and refined strategies for prior elicitation and regularization in high-dimensional regimes (Ben-David et al., 2011, Geiger et al., 2013, Ben-David et al., 2014).
Overall, the DAG-Wishart prior introduces a theoretically grounded, computationally tractable, and empirically robust methodology for Bayesian analysis of Gaussian DAG models. By combining multi-parameter flexibility, conjugacy, and strong hyper Markov properties, it enables explicit, scalable inference and structure learning across a diverse array of complex networked data contexts.