
Neural SEMs

Updated 7 December 2025
  • Neural Structural Equation Models (SEMs) are methodologies that fuse classical SEM with modern neural architectures to capture causal relationships and manage latent variables.
  • They employ adversarial min-max optimization and masked feedforward networks, enabling efficient parameter learning under complex constraints.
  • Theoretical guarantees include proven convergence rates and identifiability conditions, with practical validations in simulations and fMRI connectivity studies.

Neural Structural Equation Models (Neural SEMs) are a class of methodologies that integrate the structural modeling traditions of econometrics and causal inference with modern neural network architectures. These models address key inferential and computational challenges in structural equation modeling (SEM)—especially in settings involving latent variables, nonlinearity, and high-dimensional data—by leveraging the expressive and optimization advantages of neural networks. The resulting frameworks capture causal relationships among observed and sometimes latent variables, with parameter learning and identifiability guarantees informed by statistical and neural learning theory.

1. Mathematical Formulation of Neural SEMs

Neural SEMs generalize classical linear and nonlinear SEMs by parameterizing the structural functions or mappings via neural networks. In the most general setting, the latent variable causal model is encoded by a functional equation and a graphical structure. Let $G=(V, L, A)$ denote a directed acyclic graph with visible variables $V$, latent variables $L$, and arrows $A$, and consider the canonical SEM form

$$y_i(t) = \sum_{j\neq i} \lambda_{ij}\, y_j(t) + e_i(t)$$

in simple linear-Gaussian SEMs, or more generally

$$v = \sum_{x\rightarrow v} w_{xv}\, x + \varepsilon_v$$

for each $v \in V$, with Gaussian noise $\varepsilon_v$ and independent Gaussian latents. To embed such SEMs in a neural network regime, one seeks to solve operator equations of the form

$$A f = b, \qquad (Af)(x_2) = \mathbb{E}[f(X_1) \mid X_2 = x_2], \quad f \in L^2(X_1)$$

where both $f$ (the structural mechanism) and $u$ (a dual function) are parameterized by neural networks, and their estimation reduces to an adversarial min-max game (Liao et al., 2020).
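
As a concrete instance of the linear-Gaussian form above, the following is a minimal simulation sketch (assuming NumPy; the graph, weights, and sample size are illustrative, not taken from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weighted adjacency matrix of an acyclic graph: W[x, v] is the
# coefficient w_{xv} on the edge x -> v (zero where no edge exists).
W = np.array([
    [0.0, 0.8, 0.0],
    [0.0, 0.0, -0.5],
    [0.0, 0.0, 0.0],
])

n, d = 5000, W.shape[0]
eps = rng.normal(size=(n, d))     # independent Gaussian noise eps_v

# Solve v = sum_{x -> v} w_{xv} x + eps_v jointly for all nodes:
# Y (I - W) = eps  =>  Y = eps (I - W)^{-1}
Y = eps @ np.linalg.inv(np.eye(d) - W)

print(np.cov(Y, rowvar=False))    # empirical estimate of the implied covariance
```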

For SEMs with latent variables and marginalization, parametric marginalized DAGs (pmDAGs) serve as the graphical structure for Gaussian models. Optimization proceeds by matching the model-implied and empirical covariance via a feedforward network, with weight masking to encode the graph topology (Saremi, 2023). For general (possibly nonlinear or non-Gaussian) SEMs, this neural embedding admits greater expressive flexibility but typically at a cost to identifiability and tractability.

2. Parameter Estimation via Neural Architectures

Parameter inference in Neural SEMs is realized by training neural networks to satisfy the moment, likelihood, or operator constraints implied by the SEM structure. The adversarial framework (Liao et al., 2020) frames estimation as a saddle-point problem

$$\min_{f} \max_{u}\; \mathbb{E}\left[(f(X_1) - b(X_2))\,u(X_2) + \frac{\alpha}{2} f(X_1)^2 - \frac{1}{2} u(X_2)^2\right]$$

where $f$ and $u$ are chosen from overparameterized ReLU networks and optimized by stochastic projected primal-dual gradient descent.
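
A schematic sketch of these saddle-point updates, assuming PyTorch; network sizes, step sizes, and the `data_loader` iterator are placeholders, and the projection step of the original algorithm is omitted for brevity:

```python
import torch
import torch.nn as nn

# Two overparameterized ReLU networks: the structural mechanism f and
# the dual (adversarial) function u from the min-max objective above.
f = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))
u = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))

opt_f = torch.optim.SGD(f.parameters(), lr=1e-3)
opt_u = torch.optim.SGD(u.parameters(), lr=1e-3)
alpha = 0.1  # strong-convexity regularizer on f

for x1, x2, b in data_loader:  # hypothetical iterator over (X1, X2, b(X2)) batches
    fx, ux = f(x1), u(x2)
    payoff = ((fx - b) * ux + 0.5 * alpha * fx**2 - 0.5 * ux**2).mean()

    opt_f.zero_grad()
    opt_u.zero_grad()
    payoff.backward()
    opt_f.step()                   # descent step for the primal player f

    for p in u.parameters():       # ascent step for the dual player u
        if p.grad is not None:
            p.grad.neg_()
    opt_u.step()
```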

Alternatively, for Gaussian pmDAGs, the “SEMNAN Solver” employs a linear masked feedforward network with layers corresponding to the graph’s ancestral structure. The loss function is the negative log-likelihood or KL divergence between the sample and model-implied covariance matrices. Optimization proceeds via gradient-based updates over the masked weights, with the explicit mapping between the SEM parameters and neural weights preserved throughout (Saremi, 2023).
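
A minimal covariance-matching sketch in the same spirit, assuming PyTorch; it compresses the layered ancestral structure of the actual SEMNAN solver into a single masked weight matrix, and `sample_cov` (the empirical covariance of the observed data) is a hypothetical input:

```python
import torch

d = 3
# Binary mask encoding the DAG: mask[x, v] = 1 iff the edge x -> v exists.
mask = torch.tensor([[0., 1., 0.],
                     [0., 0., 1.],
                     [0., 0., 0.]])
W = torch.zeros(d, d, requires_grad=True)  # trainable edge weights

S = torch.as_tensor(sample_cov, dtype=torch.float32)  # empirical covariance

opt = torch.optim.Adam([W], lr=1e-2)
I = torch.eye(d)
for _ in range(2000):
    M = torch.linalg.inv(I - W * mask)  # masking enforces the graph topology
    Sigma = M.T @ M                     # model-implied covariance (unit noise)
    # KL( N(0, S) || N(0, Sigma) ), dropping terms constant in W
    loss = 0.5 * (torch.trace(torch.linalg.solve(Sigma, S)) + torch.logdet(Sigma))
    opt.zero_grad()
    loss.backward()
    opt.step()
```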

Typical neural architectures include:

  • Two-layer or multi-layer ReLU networks for operator-based adversarial formulations.
  • Strictly linear networks for the Gaussian pmDAG setting.
  • Masked connectivity patterns reflecting the original SEM (edge or hyperedge constraints).

3. Theoretical Guarantees and Statistical Properties

Neural SEMs admit convergence rates and consistency results under suitable overparameterization and regularity assumptions:

  • For adversarial neural estimation, suboptimality of the averaged estimator decreases as

$$E(\bar f_T) = \tilde O\left(T^{-1/2} + m^{-1/4}\right) \quad \text{(two-layer networks)},$$

$$E(\bar f_T) = \tilde O\left(T^{-1/2} + m^{-1/6}\right) \quad \text{(deep networks)}$$

where $T$ is the sample size and $m$ is the network width (Liao et al., 2020); a rough numerical reading of these rates follows this list.

  • In the Gaussian pmDAG framework, as the neural network matches the covariance structure, the solution corresponds exactly to the SEM maximum likelihood estimator; identifiability and mapping between weights and SEM coefficients are preserved (Saremi, 2023).
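
As a rough numerical reading of the two-layer rate (treating the constants and log factors hidden by $\tilde O$ as one; the numbers are purely illustrative):

```latex
\[
T = 10^6,\quad m = 10^4 \;\Rightarrow\; T^{-1/2} = 10^{-3}, \qquad m^{-1/4} = 10^{-1},
\]
so the width term dominates: halving the $m^{-1/4}$ contribution requires
scaling the width by $2^4 = 16$, whereas halving the $T^{-1/2}$ contribution
requires only $4\times$ more samples.
```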

Proof strategies leverage local linearization in the neural tangent kernel (NTK) regime together with mirror-descent techniques from online convex optimization. The analysis decomposes the error into an approximation term due to nonlinearity and an optimization-regret term, both controllable under mild assumptions.

4. Identifiability and Causal Inference

A central theoretical issue in Neural SEMs is identifiability—the unique recovery of structural parameters or causal effects from observational data given the model structure. For linear-Gaussian SEMs/pmDAGs, several results hold:

  • Bow-free identifiability: If the model graph is acyclic with no “bow” structures (i.e., no node pair joined by both a directed and a bidirected edge), parameter recovery is almost everywhere unique, and the total causal effect is identified as an entry of $(I-W_G)^{-1}$ (see the sketch below).
  • Half-trek criterion: More general algebraic identifiability follows from verifying the half-trek criterion on the directed mixed graph.

Algorithmic meta-criteria (e.g., Tian–Pearl ID algorithm, regression-based adjustment for backdoor, or Cholesky decomposition in the bow-free case) provide practical means to certify identifiability in the linear-Gaussian setting (Saremi, 2023).
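
For the bow-free linear-Gaussian case, the total-effect computation mentioned above reduces to a matrix inversion; a minimal sketch assuming NumPy, with illustrative weights:

```python
import numpy as np

# Illustrative edge weights of an acyclic, bow-free linear SEM:
# W[x, v] = w_{xv} for the edge x -> v.
W = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, -0.5],
              [0.0, 0.0, 0.0]])

# (I - W)^{-1} = I + W + W^2 + ... sums path products over all directed
# paths, so entry (x, v) is the total causal effect of x on v.
T = np.linalg.inv(np.eye(3) - W)
print(T[0, 2])  # total effect of node 0 on node 2: 0.8 * (-0.5) = -0.4
```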

For nonlinear or non-Gaussian SEMs parameterized by deep networks, identifiability typically requires additional modeling constraints or is non-generic.

5. Extensions, Generalizations, and Limitations

Neural SEMs' core methodologies extend to a variety of settings:

  • Discrete and non-Gaussian SEMs: Linear activations are replaced with softmax (for discrete) or nonlinearities matching the target family; the loss is tailored to match the empirical and model-implied moments or sufficient statistics. Identifiability is generally more difficult, especially with discrete latent confounders.
  • Nonparametric SEMs and GAN-style adversarial learning: The operator equation framework extends naturally to cases where the structural functions are nonparametric and estimated via adversarial neural training, although identifiability may fail and optimization is challenging (Liao et al., 2020).
  • Effective connectivity in fMRI: Bayesian testing of compatibility between SEM-implied conditional independence and data can be performed via partial correlations and posterior predictive assessments at individual, link-wise, and global levels (Marrelec et al., 9 Sep 2024).
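
As an illustration of the partial-correlation machinery in the last item, a minimal sketch assuming NumPy; the Bayesian posterior-predictive layer of the cited work is not reproduced, and the appropriate conditioning sets depend on the graph (conditioning on all remaining variables is shown only for concreteness):

```python
import numpy as np

def partial_correlations(cov: np.ndarray) -> np.ndarray:
    """Partial correlation of each pair given all remaining variables."""
    P = np.linalg.inv(cov)        # precision matrix
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)    # rho_ij|rest = -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# A link missing from the SEM implies a vanishing-partial-correlation
# constraint; compatibility testing checks such constraints against data.
```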

A summary of the limitations:

| Limitation | Context | Source |
|---|---|---|
| Requires overparameterization | Adversarial neural SEMs | (Liao et al., 2020) |
| Lack of closed-form solutions | Non-Gaussian/nonlinear SEMs | (Saremi, 2023) |
| Identifiability non-generic | Discrete latent variables | (Saremi, 2023) |
| Empirical validation lacking | Adversarial neural SEMs | (Liao et al., 2020) |

6. Empirical Validation and Applications

Empirical studies utilizing neural SEMs encompass both simulated and real datasets:

  • Simulation for model discrimination: Bayesian multilevel testing in fMRI effective connectivity distinguishes between competing SEMs with high sensitivity and specificity, as evidenced by the percentiles of posterior $p$-values and rates of significant constraint violations under various generative models (Marrelec et al., 9 Sep 2024).
  • Reanalysis of fMRI data: Application to physiological data shows that the conditional-independence constraints implied by certain missing links in an anatomical model are robustly contradicted by the data, while a data-driven best-fit graph is globally compatible.

Explicit empirical benchmarks and results for fully adversarial neural SEMs have been identified as an open area; prior work focuses predominantly on theoretical guarantees (Liao et al., 2020).

7. Future Directions and Open Challenges

Progress in neural structural equation modeling prompts several research questions:

  • Robust identification and statistical inference for deep, nonlinear, and non-Gaussian SEMs using adversarial networks;
  • Practical algorithms for identifiability certification in settings with complex latent confounding;
  • Efficient training methods in high-dimensional or mixed-data regimes, balancing expressivity, identifiability, and tractability;
  • Extensions to broader classes of exponential family SEMs or full nonparametric settings leveraging neural architectures;
  • Empirical evaluation on diverse domains, particularly for policy-relevant causal effects.

Generalization beyond Gaussian distributions requires further theoretical advances for both estimation consistency and identifiability, given the lack of convexity and closed-form covariance propagation inherent to the neural framework (Saremi, 2023). The nonparametric SEM setting, and adversarial approaches in particular, constitute a major open frontier.
