Zero-Inflated Continuous Optimization (ZICO)
- Zero-Inflated Continuous Optimization (ZICO) is a framework for learning DAG structures from zero-inflated count data using specialized ZI-GLMs.
- It integrates sparsity regularization and a differentiable acyclicity constraint to effectively distinguish structural zeros from sampling zeros.
- Empirical results on simulated networks and transcriptomics data show ZICO achieves faster, more accurate causal structure recovery than standard methods.
Zero-Inflated Continuous Optimization (ZICO) is a framework for learning the structure of directed acyclic graphs (DAGs) from zero-inflated count data. ZICO formulates the structure learning problem as a smooth, constrained optimization involving node-wise zero-inflated generalized linear models (ZI-GLMs), sparsity regularization, and a differentiable acyclicity constraint. The method is designed to distinguish structural zeros from sampling zeros—a critical challenge in contexts such as gene regulatory network inference, single-cell transcriptomics, and other domains in which excess zeros occur due to underlying biological or measurement processes. ZICO enables scalable and accurate recovery of causal structures in settings where standard methods fail to model zero inflation effectively (Sato et al., 18 Dec 2025).
1. Problem Formulation and Motivation
The essential input is a count data matrix $X \in \mathbb{Z}_{\geq 0}^{\,n \times d}$ of $n$ samples over $d$ variables, characterized by high rates of exact zeros (“zero-inflation”). Standard DAG learning procedures—such as NOTEARS, greedy equivalence search (GES), or SCORE—are ill-equipped for zero-inflated settings, typically assuming continuous or unadjusted count models (e.g., Poisson), which do not distinguish between structural zeros (from explicit zero-inflation mechanisms) and sampling zeros. This leads to systematically biased edge scores and impaired structure recovery.
The goal is to infer a weighted adjacency matrix $W \in \mathbb{R}^{d \times d}$, where the entry $W_{kj}$ encodes the directed influence from node $k$ to node $j$, under a DAG constraint enforced on the support of $W$.
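To make the input concrete, the sketch below simulates zero-inflated Poisson counts from a known DAG. The generator name `simulate_zip_dag`, the fixed structural-zero probability `pi0`, the baseline intercept, and the clipping of the linear predictor are illustrative assumptions, not part of ZICO.

```python
import numpy as np

def simulate_zip_dag(W, n, pi0=0.3, rng=None):
    """Simulate zero-inflated Poisson data from a DAG.

    W   : (d, d) weighted adjacency, W[k, j] != 0 means k -> j;
          assumed upper-triangular (topologically ordered) for simplicity.
    n   : number of samples.
    pi0 : fixed probability of a structural zero (simplifying assumption).
    """
    rng = np.random.default_rng(rng)
    d = W.shape[0]
    X = np.zeros((n, d))
    for j in range(d):                      # nodes visited in topological order
        eta = 0.5 + X @ W[:, j]             # linear predictor from parents (0.5 = illustrative intercept)
        mu = np.exp(np.clip(eta, -10, 10))  # log link for the Poisson mean
        counts = rng.poisson(mu)
        zeros = rng.random(n) < pi0         # structural-zero indicator
        X[:, j] = np.where(zeros, 0, counts)
    return X

# Example: a 3-node chain 1 -> 2 -> 3
W_true = np.array([[0.0, 0.4, 0.0],
                   [0.0, 0.0, 0.3],
                   [0.0, 0.0, 0.0]])
X = simulate_zip_dag(W_true, n=500, rng=0)
print(X.shape, (X == 0).mean())  # fraction of exact zeros
```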
2. Zero-Inflated Generalized Linear Models
For each node $j$, ZICO models the conditional distribution of $x_j$ given its candidate parents using a two-component ZI-GLM mixture. Specialized forms are as follows:
2.1 Zero-Inflated Poisson (ZIP)
For a sample $i$ and node $j$:
$$
x_{ij} \sim \begin{cases} 0 & \text{with probability } \pi_{ij}, \\ \mathrm{Poisson}(\mu_{ij}) & \text{with probability } 1-\pi_{ij}, \end{cases}
$$
where
$$
\operatorname{logit}(\pi_{ij}) = \gamma_{0j} + \sum_{k \neq j} W^{(\pi)}_{kj}\, x_{ik}, \qquad \log(\mu_{ij}) = \beta_{0j} + \sum_{k \neq j} W^{(\mu)}_{kj}\, x_{ik}.
$$
The logit link parametrizes structural zeros, and the log link models the mean of the count component.
2.2 Zero-Inflated Negative Binomial (ZINB)
The ZINB variant adds a dispersion parameter $\theta_j > 0$ and replaces the Poisson count component with a negative binomial:
$$
x_{ij} \sim \begin{cases} 0 & \text{with probability } \pi_{ij}, \\ \mathrm{NB}(\mu_{ij}, \theta_j) & \text{with probability } 1-\pi_{ij}, \end{cases}
$$
with the same logit and log links as in the ZIP case.
Each node’s parameters are collectively $\Theta_j = \{\gamma_{0j}, \beta_{0j}, W^{(\pi)}_{\cdot j}, W^{(\mu)}_{\cdot j}, \theta_j\}$, with the dispersion $\theta_j$ present only in the ZINB case; a likelihood sketch is given below.
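The node-wise ZIP likelihood can be evaluated directly. The sketch below is a minimal NumPy version under the logit/log parameterization stated above; the function name `zip_nll` and the small numerical safeguard inside the logarithm are illustrative choices, and the ZINB case would swap the Poisson log-pmf for a negative-binomial one carrying $\theta_j$.

```python
import numpy as np
from scipy.special import gammaln

def zip_nll(x, logit_pi, log_mu):
    """Negative log-likelihood of a zero-inflated Poisson, vectorized over samples.

    x        : observed counts for one node, shape (n,).
    logit_pi : linear predictor of the structural-zero probability, shape (n,).
    log_mu   : linear predictor of the Poisson mean (log link), shape (n,).
    """
    pi = 1.0 / (1.0 + np.exp(-logit_pi))      # sigmoid of the logit predictor
    mu = np.exp(log_mu)
    # log P(x = 0) = log(pi + (1 - pi) * exp(-mu)); small constant guards against log(0)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-mu) + 1e-12)
    # log P(x = k), k > 0: the count must come from the Poisson component
    ll_pos = np.log1p(-pi) + x * log_mu - mu - gammaln(x + 1.0)
    return -np.where(x == 0, ll_zero, ll_pos).sum()
```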
3. Smooth Score-Based Objective and Regularization
The aggregate (negative) log-likelihood is
$$
\mathcal{L}(\Theta) = -\sum_{j=1}^{d} \sum_{i=1}^{n} \log p\!\left(x_{ij} \mid x_{i,-j};\, \Theta_j\right),
$$
with node-specific coefficients stacked into matrices $W^{(\mu)} = \big[W^{(\mu)}_{\cdot 1}, \ldots, W^{(\mu)}_{\cdot d}\big]$ and similarly for $W^{(\pi)}$.
To induce sparsity, ZICO incorporates an $\ell_1$ or group penalty: for elementwise sparsity,
$$
\Omega(W) = \lambda \left( \big\|W^{(\mu)}\big\|_1 + \big\|W^{(\pi)}\big\|_1 \right),
$$
while the group variant penalizes related coefficients jointly through their $\ell_2$ norm.
The resulting score-based objective combines the two terms,
$$
F(\Theta) = \mathcal{L}(\Theta) + \Omega(W),
$$
minimized jointly over all node parameters.
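A minimal sketch of how this score can be assembled in NumPy, using the notation above. The helper name `penalized_score`, the intercept arrays `beta0`/`gamma0`, and the injected `node_nll` callable (for example, the `zip_nll` sketch above) are illustrative, not part of the ZICO API.

```python
import numpy as np

def penalized_score(X, W_mu, W_pi, beta0, gamma0, lam, node_nll):
    """Score F(Theta): sum of node-wise ZI-GLM NLLs plus an elementwise L1 penalty.

    X        : (n, d) zero-inflated count matrix.
    W_mu     : (d, d) coefficients of the log-mean (count) component.
    W_pi     : (d, d) coefficients of the logit (structural-zero) component.
    beta0,
    gamma0   : (d,) intercepts of the count and zero components.
    node_nll : callable (x_j, logit_pi, log_mu) -> scalar negative log-likelihood.
    Assumes zero diagonals in W_mu and W_pi, so a node never regresses on itself.
    """
    n, d = X.shape
    total = 0.0
    for j in range(d):
        log_mu = beta0[j] + X @ W_mu[:, j]      # log link: mean of the count component
        logit_pi = gamma0[j] + X @ W_pi[:, j]   # logit link: structural-zero probability
        total += node_nll(X[:, j], logit_pi, log_mu)
    return total + lam * (np.abs(W_mu).sum() + np.abs(W_pi).sum())
```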
4. Differentiable Acyclicity Constraints
ZICO enforces global DAG structure via a differentiable surrogate constraint
$$
h(W) = \operatorname{tr}\!\left(e^{\,W \circ W}\right) - d,
$$
where $W \circ W$ denotes elementwise squaring; this function vanishes if and only if the directed graph with adjacency $W$ has no directed cycles.
The method requires both graphs implied by $W^{(\mu)}$ and $W^{(\pi)}$ to be acyclic, i.e., $h\big(W^{(\mu)}\big) = 0$ and $h\big(W^{(\pi)}\big) = 0$, ensuring a proper DAG structure for each set of weights; a short implementation of $h$ is sketched below.
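A minimal NumPy sketch of this surrogate, using SciPy's matrix exponential; the function name `acyclicity_h` is an illustrative choice.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_h(W):
    """Differentiable acyclicity surrogate: h(W) = tr(exp(W * W)) - d.

    h(W) == 0 exactly when the graph with weighted adjacency W has no directed cycles.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d   # W * W is the elementwise square

# Both coefficient matrices must satisfy the constraint:
# acyclicity_h(W_mu) == 0 and acyclicity_h(W_pi) == 0
```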
5. Constrained Optimization Framework
The core optimization problem is
$$
\min_{\Theta}\; \mathcal{L}(\Theta) + \Omega(W) \quad \text{subject to} \quad h\!\left(W^{(\mu)}\right) = 0, \;\; h\!\left(W^{(\pi)}\right) = 0.
$$
This is solved using an augmented Lagrangian or penalty functional applied to each acyclicity constraint, e.g.,
$$
\mathcal{L}_{\rho}(\Theta, \alpha) = \mathcal{L}(\Theta) + \Omega(W) + \alpha\, h(W) + \frac{\rho}{2}\, h(W)^{2},
$$
with central-path style updates over the dual variable $\alpha$ and penalty parameter $\rho$, alternating with gradient-based steps (AdamW) and mini-batch likelihood evaluation.
Regularization scheduling includes cosine annealing of the sparsity weight $\lambda$ (promoting gradual sparsification) and a decaying schedule that adjusts constraint enforcement dynamically. After each gradient step, proximal (soft-thresholding) operations are applied for elementwise $\ell_1$ penalization. Convergence is determined by primal feasibility ($|h(W)| \leq \epsilon$ for a small tolerance $\epsilon$), dual residuals, and objective-change criteria; a skeleton of the resulting loop is sketched below.
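The following PyTorch skeleton illustrates one way such a loop can be organized, under simplifying assumptions not taken from the paper: a single coefficient matrix `W`, a user-supplied `nll_fn` (for example, a mini-batched ZI-GLM negative log-likelihood), a fixed tenfold growth of $\rho$, and a soft-threshold magnitude tied to the learning rate; the cosine annealing of $\lambda$ is omitted for brevity.

```python
import torch

def soft_threshold(W, tau):
    """Proximal step for the elementwise L1 penalty (applied after the gradient step)."""
    with torch.no_grad():
        W.copy_(torch.sign(W) * torch.clamp(W.abs() - tau, min=0.0))

def acyclicity(W):
    """h(W) = tr(matrix_exp(W * W)) - d, the differentiable acyclicity surrogate."""
    return torch.trace(torch.matrix_exp(W * W)) - W.shape[0]

def fit(nll_fn, W, lam=0.1, rho=1.0, alpha=0.0, lr=1e-2,
        outer_steps=20, inner_steps=200, tol=1e-8):
    """Skeleton of an augmented-Lagrangian loop (illustrative, single matrix W)."""
    opt = torch.optim.AdamW([W], lr=lr)
    for _ in range(outer_steps):
        for _ in range(inner_steps):               # inner smooth minimization
            opt.zero_grad()
            h = acyclicity(W)
            loss = nll_fn(W) + alpha * h + 0.5 * rho * h * h
            loss.backward()
            opt.step()
            soft_threshold(W, lr * lam)            # proximal L1 update
        with torch.no_grad():
            h_val = acyclicity(W).item()
        if abs(h_val) < tol:                       # primal feasibility reached
            break
        alpha += rho * h_val                       # dual ascent
        rho *= 10.0                                # tighten the penalty
    return W

# Example usage (shapes only; nll_fn is assumed to be defined elsewhere):
# W = torch.zeros(d, d, requires_grad=True)
# W_hat = fit(lambda W: my_zi_glm_nll(X_batch, W), W)
```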
6. Theoretical Properties and Computational Complexity
Under standard smoothness assumptions, limit points of the augmented Lagrangian approach correspond to Karush–Kuhn–Tucker points for the original constrained problem. Local convergence rates can be derived under standard assumptions on the Hessian of the objective.
The computational cost of a gradient step is $O(n_b d^2 + d^3)$, where $n_b$ is the batch size; for $d \ll n_b$, the cubic term (from the matrix exponential in the acyclicity constraint) is negligible. The algorithm scales linearly in the sample size $n$ by mini-batching and can achieve linear or even sublinear complexity in $d$ via exploitation of sparsity and low-rank approximations for the acyclicity constraint.
7. Empirical Evaluation and Applications
Empirical studies demonstrate ZICO’s advantages in structure recovery from zero-inflated count data. Principal findings include:
- Simulated Erdős–Rényi (ER) networks: ZICO (with ZINB) yields the lowest structural Hamming distance (SHD) and structural intervention distance (SID) among ten compared methods. In the reported configuration, TPR ≈ 0.78, FDR ≈ 0.41, and SHD ≈ 180 in approximately 55 s, whereas ZiGDAG reports SHD ≈ 283 in ≈ 18,000 s.
- Barabási–Albert graphs: ZICO–ZINB achieves SHD ≈ 74, SID ≈ 1247 in ≈ 49 s, outperforming NOTEARS (Poisson variant) and greedy search approaches.
- Single-cell transcriptomics (scMultiSim): ZICO (ZIP/ZINB) achieves AUPRC ratios up to three times the random baseline, comparable to or outperforming GENIE3, SINCERITIES, LEAP, NOTEARS, and GRNBoost2, with explicit acyclicity enforcement.
- Dropout settings: ZICO–ZIP/ZINB outperforms pure Poisson/NB models, affirming the necessity of explicit zero-inflation modeling under nontrivial zero-generating mechanisms.
ZICO provides efficient, vectorized, and mini-batched learning for large-scale settings, making it suitable for reverse engineering gene regulatory networks and broader contexts with zero-inflated counts. The method recovers zero-inflated causal structures more accurately and one to two orders of magnitude faster than greedy search or MCMC-based alternatives (Sato et al., 18 Dec 2025).