
Causal Adjacency Learning (CAL)

Updated 9 December 2025
  • Causal Adjacency Learning (CAL) is a framework that learns a DAG's binary adjacency matrix from data using statistical models and smooth relaxations.
  • It employs continuous masking, differentiable acyclicity constraints, and sparsity penalties to enable efficient, gradient-based estimation of causal structures.
  • Empirical evaluations demonstrate CAL's superior performance in reducing structural errors and boosting true positive rates compared to methods like NOTEARS.

Causal Adjacency Learning (CAL) is the process of learning the edge structure—that is, the adjacency matrix—of a causal graph, typically a directed acyclic graph (DAG), from observational or interventional data. This problem is foundational for causal discovery, as the adjacency encodes the direct causal influences between variables. Research in CAL combines statistical theory, differentiable optimization, graphical modeling, and advances in computational methodology to recover reliable causal structures from data, even in high-dimensional or challenging regimes.

1. Formal Definitions and Structural Framework

In CAL, the aim is to recover the binary adjacency matrix $A \in \{0,1\}^{d\times d}$ of a DAG $\mathcal{G}$ given data $X = \{x^{(k)}\}_{k=1}^n$ generated by a structural equation model (SEM). In the standard additive-noise SEM form,

$$X_i = f_i\big(X_{\mathrm{pa}(i)}\big) + \epsilon_i$$

where $\mathrm{pa}(i)$ indexes the parents of node $i$ in $\mathcal{G}$, the $\epsilon_i$ are independent noise variables, and the $f_i$ are non-constant in each argument to enforce causal minimality. The adjacency matrix has $A_{ji}=1$ if $X_j$ is a parent of $X_i$, and $A_{ji}=0$ otherwise. This adjacency may be binary or, in weighted variants, real-valued with its support indicating the graph structure (Ng et al., 2019).

Learning the adjacency amounts to inferring, for each pair $(i,j)$, whether there is a direct causal edge $X_j \to X_i$, consistent with observed conditional independencies or the statistical properties of the data distribution.
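
As a concrete illustration of this setup, the sketch below generates data from a small additive-noise SEM and prints the adjacency matrix that a CAL procedure would aim to recover. The three-node graph, the particular nonlinear $f_i$, and the noise scale are illustrative choices, not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-specified DAG on d = 3 nodes: X1 -> X2, X1 -> X3, X2 -> X3.
# Convention as in the text: A[j, i] = 1 means X_j is a parent of X_i.
d, n = 3, 1000
A = np.zeros((d, d), dtype=int)
A[0, 1] = 1   # X1 -> X2
A[0, 2] = 1   # X1 -> X3
A[1, 2] = 1   # X2 -> X3

# Additive-noise SEM  X_i = f_i(X_pa(i)) + eps_i  with illustrative nonlinear f_i.
X = np.zeros((n, d))
eps = rng.normal(scale=0.1, size=(n, d))
X[:, 0] = eps[:, 0]                                     # root node: pure noise
X[:, 1] = np.sin(X[:, 0]) + eps[:, 1]                   # f_2(X_1) = sin(X_1)
X[:, 2] = X[:, 0] ** 2 + np.tanh(X[:, 1]) + eps[:, 2]   # f_3(X_1, X_2)

print(A)        # the target of causal adjacency learning
print(X.shape)  # (1000, 3) observational sample
```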

2. Optimization and Smooth Relaxations

Direct optimization over discrete combinatorial objects (the entries of $A$) is intractable for even moderately sized $d$. CAL approaches thus rely on continuous relaxations and penalized objectives:

  • Continuous Masking: The binary adjacency matrix $A$ is parameterized via a continuous surrogate, such as $\hat A_{ji} = \sigma\!\left( (U_{ji}+\xi_{ji})/\tau \right)$, where $U$ is an unconstrained "logit" matrix, $\xi_{ji}\sim \mathrm{Logistic}(0,1)$, $\sigma(\cdot)$ is the sigmoid, and $\tau>0$ is a temperature (Ng et al., 2019). Small $\tau$ ensures that $\hat A_{ji}$ concentrates near $\{0,1\}$.
  • Acyclicity Constraints: To ensure the learned structure is a DAG, a smooth functional such as $h(A) = \mathrm{Tr}\!\left(e^{A}\right) - d$ is employed. This differentiable constraint is zero if and only if $A$ corresponds to an acyclic graph (Ng et al., 2019); a minimal numerical check appears after this list. Penalties or augmented Lagrangian terms enforce (or nudge) acyclicity during training.
  • Sparsity Induction: An $\ell_1$ penalty $\lambda\|A\|_1$ promotes graph sparsity. Tuning $\lambda$ controls the trade-off between data fit and model complexity.
  • Augmented Lagrangian: Optimization proceeds by alternately updating $\{U,\phi\}$ (mask logits and SEM parameters), Lagrange multipliers, and acyclicity penalty weights (Ng et al., 2019). After convergence, the mean mask matrix $M=\sigma(U/\tau)$ is thresholded to yield a discrete, guaranteed-acyclic adjacency estimate.
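
As referenced in the acyclicity bullet above, the following minimal check evaluates $h(A) = \mathrm{Tr}(e^{A}) - d$ on an acyclic and a cyclic adjacency matrix. It is a sketch assuming SciPy's matrix exponential, and the example graphs are hand-picked.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(A):
    """h(A) = Tr(exp(A)) - d; zero iff the non-negative matrix A has no directed cycle."""
    return np.trace(expm(A)) - A.shape[0]

dag = np.array([[0., 1., 1.],
                [0., 0., 1.],
                [0., 0., 0.]])   # X1 -> X2 -> X3 and X1 -> X3 (acyclic)
cyc = dag.copy()
cyc[2, 0] = 1.                   # add X3 -> X1, creating a cycle

print(acyclicity(dag))  # ~0.0 up to numerical error
print(acyclicity(cyc))  # strictly positive: a cycle is detected
```

The value is (numerically) zero for the DAG and strictly positive once a cycle is introduced, which is what makes $h$ usable as a differentiable penalty.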

This relaxation–thresholding paradigm enables efficient, gradient-based learning and supports a variety of smooth SEM function classes.
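
A compact end-to-end sketch of this relaxation–thresholding loop is given below, written for a linear SEM in PyTorch. The linear model, the hyperparameters, the fixed inner-loop length, and the simple multiplier and penalty updates are simplifying assumptions for illustration; this is not the implementation from the cited work.

```python
import torch

torch.manual_seed(0)
d, n, tau, lam = 5, 500, 0.2, 0.01

# Illustrative linear-SEM data: X = X W_true + E with a strictly upper-triangular
# (hence acyclic) weight matrix, so X = E (I - W_true)^{-1}.
A_true = torch.triu(torch.bernoulli(torch.full((d, d), 0.4)), diagonal=1)
W_true = A_true * torch.empty(d, d).uniform_(0.5, 1.5)
X = torch.randn(n, d) @ torch.inverse(torch.eye(d) - W_true)

U = torch.zeros(d, d, requires_grad=True)   # mask logits, one per potential edge
W = torch.zeros(d, d, requires_grad=True)   # linear SEM weights
opt = torch.optim.Adam([U, W], lr=0.01)
off_diag = 1.0 - torch.eye(d)               # disallow self-loops

alpha, rho = 0.0, 1.0                       # Lagrange multiplier and penalty weight
for outer in range(10):
    for _ in range(300):
        # Logistic (binary Gumbel-Softmax) relaxation of the 0/1 mask.
        u = torch.rand(d, d).clamp(1e-6, 1 - 1e-6)
        xi = torch.log(u) - torch.log(1.0 - u)          # Logistic(0, 1) noise
        A_hat = torch.sigmoid((U + xi) / tau) * off_diag

        resid = X - X @ (A_hat * W)                     # linear-SEM reconstruction
        h = torch.trace(torch.matrix_exp(A_hat)) - d    # smooth acyclicity measure
        loss = (0.5 / n) * (resid ** 2).sum() \
               + lam * A_hat.sum() \
               + alpha * h + 0.5 * rho * h ** 2         # augmented Lagrangian

        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        M = torch.sigmoid(U / tau) * off_diag           # mean mask (no noise)
        h_val = (torch.trace(torch.matrix_exp(M)) - d).item()
    alpha += rho * h_val                                # dual ascent on the multiplier
    if h_val > 1e-8:
        rho *= 10.0                                     # tighten the acyclicity penalty

A_est = (torch.sigmoid(U / tau) * off_diag > 0.5).int() # threshold the mean mask
print(A_est)
```

In practice the linear reconstruction can be replaced by neural-network SEM functions, and the penalty weight is usually increased only when the acyclicity violation fails to shrink sufficiently between outer iterations.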

3. Identifiability and Statistical Guarantees

The identifiability of the causal adjacency from observed data depends critically on assumptions about the SEM and noise.

  • ANM Identifiability: Under the "restricted additive noise model" (ANM) assumptions of independent noises with strictly positive density and nondegenerate (nonlinear) $f_i$, the true DAG is identifiable from the joint distribution $P(X)$ (Ng et al., 2019).
  • Supergraph Recovery: In the limit of infinite data and a correctly specified function class for the $f_i$, the learned adjacency is guaranteed to contain the true DAG's edges (i.e., it is a supergraph of the truth). Extraneous edges can be subsequently pruned using statistical tests or post-processing such as CAM-pruning (Ng et al., 2019).
  • Consistency: Provided the optimization attains a global minimum and the model class is expressive enough, CAL procedures are consistent estimators of the underlying graph, assuming identifiability. Mild conditions (non-degeneracy, independence, and causal minimality) suffice.
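
A simple pruning pass consistent with the supergraph-recovery point above might look like the following sketch. It uses ordinary least squares and a coefficient-magnitude cut-off as a linear stand-in for CAM-pruning's significance tests; the function name and the threshold are illustrative assumptions.

```python
import numpy as np

def prune_supergraph(X, A_super, min_coef=0.1):
    """Drop candidate edges whose standardized OLS coefficient is negligible.

    A simplified, linear stand-in for CAM-pruning: regress each node on its
    candidate parents and keep only parents with non-trivial coefficients.
    """
    n, d = X.shape
    Xs = (X - X.mean(0)) / X.std(0)          # standardize so coefficients are comparable
    A_pruned = np.zeros_like(A_super)
    for i in range(d):
        parents = np.flatnonzero(A_super[:, i])
        if parents.size == 0:
            continue
        coef, *_ = np.linalg.lstsq(Xs[:, parents], Xs[:, i], rcond=None)
        keep = parents[np.abs(coef) >= min_coef]
        A_pruned[keep, i] = 1
    return A_pruned
```

CAM-pruning itself fits generalized additive models per node and removes parents whose terms are not statistically significant; the magnitude cut-off here is only a placeholder for such a test.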

4. Empirical Evaluation and Benchmarking

State-of-the-art CAL methods are validated via both synthetic and real-world data.

  • Synthetic DAGs: Experiments on Erdős–Rényi random DAGs (e.g., $d\in\{10,20,50,100\}$, degree $\in\{1,4\}$) and various SEM types (Gaussian process, quadratic, post-nonlinear) assess estimator performance via structural Hamming distance (SHD) and true positive rate (TPR) (Ng et al., 2019). CAL using smooth masking and acyclicity constraints (e.g., MCSL) yields SHD reductions of 20–50% relative to baselines such as NOTEARS, GraN-DAG, and DAG-GNN.
  • Real Networks: CAL matches or outperforms previous approaches on protein-signaling networks (e.g., the Sachs data, SHD = 12, the best known result) and on telecom fault root-cause graphs (finding $\approx 80\%$ of true causes versus $\leq 30\%$ for alternatives) (Ng et al., 2019).
  • Comparisons: Methods employing smooth adjacency masking, differentiable DAG constraints, and auxiliary pruning dominate classical greedy search and constraint-based algorithms, especially on nonlinear or moderately sized graphs.
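
Both metrics are computed directly from the estimated and true adjacency matrices. The sketch below uses one common convention (skeleton insertions/deletions plus reversals for SHD, and directed-edge recall for TPR); exact conventions differ slightly across papers, so this is an assumption rather than the evaluation code of the cited work.

```python
import numpy as np

def shd(A_est, A_true):
    """Structural Hamming distance: missing/extra skeleton edges plus reversed edges."""
    E, T = A_est.astype(bool), A_true.astype(bool)
    skeleton_diff = int(np.triu((E | E.T) ^ (T | T.T), 1).sum())  # adjacent in only one graph
    reversed_edges = int((E & ~T & T.T).sum())                    # right skeleton, wrong direction
    return skeleton_diff + reversed_edges

def tpr(A_est, A_true):
    """True positive rate: fraction of true directed edges recovered with correct orientation."""
    E, T = A_est.astype(bool), A_true.astype(bool)
    return (E & T).sum() / max(int(T.sum()), 1)

A_true = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [0, 0, 0]])
A_est = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])                  # one edge reversed relative to the truth
print(shd(A_est, A_true), tpr(A_est, A_true))  # 1, 0.666...
```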

5. Relation to Broader Causal Discovery Literature

CAL approaches build on and extend several key frameworks:

  • Note on Orientation vs. Adjacency: CAL focuses on learning the adjacency/skeleton (the presence of edges); edge orientations need not be recoverable when faithfulness assumptions do not fully hold. Algorithms such as Conservative PC and related constraint-based variants clarify which guarantees hold under Adjacency-Faithfulness versus full Faithfulness (Ramsey et al., 2012).
  • Generalization Beyond SEMs: While CAL has classically been developed for additive-noise or linear SEMs, the masked-gradient structure extends naturally to deep or nonlinear SCMs, with the functions $f_i$ parameterized as neural networks (Ng et al., 2019).
  • Regularization and Auxiliary Losses: Recent work embeds CAL within predictive and autoencoding architectures, using auxiliary reconstruction and acyclicity penalties to steer representation learning toward causally faithful adjacencies (Kyono et al., 2020).

6. Limitations, Extensions, and Practical Considerations

  • Model Mis-Specification: When the fitted SEM function class does not match the true data-generating process, recovery guarantees may hold only up to a supergraph, and spurious edges may require dedicated pruning stages.
  • Thresholding and Discreteness: The final discrete adjacency requires hard thresholding; small values of the mask temperature $\tau$ in the Gumbel-Softmax relaxation keep the masks near binary, but threshold selection still affects sensitivity–specificity trade-offs.
  • Scalability: The combination of differentiable masking and acyclicity constraints allows CAL methods to scale to moderate ($d \sim 100$) node regimes, but they remain limited by computational cost in very high dimensions.
  • Post-Processing: CAM-pruning and similar post hoc strategies are often necessary to control for over-inclusion induced by regularization and to isolate the minimal true causal skeleton.
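
For the thresholding point above, one simple post-processing heuristic (an illustrative choice, not the selection rule from the cited work) is to start from a default cut-off and raise it until the binarized graph passes the acyclicity check:

```python
import numpy as np
from scipy.linalg import expm

def binarize_acyclic(M, start=0.5, step=0.05):
    """Threshold a soft mask M in [0, 1]^{d x d}, raising the cut-off until the result is a DAG."""
    d = M.shape[0]
    t = start
    while t <= 1.0:
        A = (M > t).astype(float)
        if np.trace(expm(A)) - d < 1e-8:        # h(A) = 0  <=>  acyclic
            return A.astype(int), t
        t += step
    return np.zeros_like(M, dtype=int), 1.0     # fall back to the empty graph
```

When the temperature is small the mean mask is already close to binary, so the search usually terminates at or near the initial cut-off; remaining spurious edges can then be handled by CAM-pruning as noted above.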

CAL thus constitutes a robust approach to causal structure discovery, central to modern differentiable causal discovery pipelines, with well-understood identifiability conditions, strong empirical performance relative to prior methods, and relevance to both theoretical and real-world applications (Ng et al., 2019).
