LOCAL: Learning with Orientation Matrix to Infer Causal Structure from Time Series Data (2410.19464v4)

Published 25 Oct 2024 in cs.LG, cs.AI, and stat.ML

Abstract: Discovering the underlying Directed Acyclic Graph (DAG) from time series observational data is highly challenging due to the dynamic nature and complex nonlinear interactions between variables. Existing methods typically search for the optimal DAG by optimizing an objective function but face scalability challenges, as their computational demands grow exponentially with the dimensional expansion of variables. To this end, we propose LOCAL, a highly efficient, easy-to-implement, and constraint-free method for recovering dynamic causal structures. LOCAL is the first attempt to formulate a quasi-maximum likelihood-based score function for learning the dynamic DAG equivalent to the ground truth. Building on this, we introduce two adaptive modules that enhance the algebraic characterization of acyclicity: Asymptotic Causal Mask Learning (ACML) and Dynamic Graph Parameter Learning (DGPL). ACML constructs causal masks using learnable priority vectors and the Gumbel-Sigmoid function, ensuring DAG formation while optimizing computational efficiency. DGPL transforms causal learning into decomposed matrix products, capturing dynamic causal structure in high-dimensional data and improving interpretability. Extensive experiments on synthetic and real-world datasets demonstrate that LOCAL significantly outperforms existing methods and highlight LOCAL's potential as a robust and efficient method for dynamic causal discovery.

References (45)
  1. BayesDAG: Gradient-Based Posterior Inference for Causal Discovery. In Advances in Neural Information Processing Systems. 1738–1763.
  2. EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 10017–10029.
  3. DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.). 8226–8239.
  4. Neural graphical modelling in continuous-time: consistency guarantees and algorithms. In International Conference on Learning Representations. https://openreview.net/forum?id=SsHBkfeRF9L
  5. Latent Convergent Cross Mapping. In International Conference on Learning Representations. https://openreview.net/forum?id=4TSiOTkKe5P
  6. A Causal View for Item-level Effect of Recommendation on User Preference. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (WSDM ’23). Association for Computing Machinery, 240–248.
  7. Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix. arXiv:2311.00674 [stat.ML]
  8. Differentiable DAG Sampling. In International Conference on Learning Representations.
  9. Reweighted Low-Rank Factorization With Deep Prior for Image Restoration. IEEE Transactions on Signal Processing 70 (2022), 3514–3529.
  10. Causal structure learning for high-dimensional non-stationary time series. Knowledge-Based Systems 295 (2024), 111868.
  11. CUTS+: High-Dimensional Causal Discovery from Irregular Time-Series. Proceedings of the AAAI Conference on Artificial Intelligence 38, 10 (Mar. 2024), 11525–11533.
  12. CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=iad1yyyGme
  13. CUTS: Neural Causal Discovery from Irregular Time-Series Data. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=UG8bQcD3Emv
  14. Shuyu Dong and Michèle Sebag. 2023. From Graphs to DAGs: A Low-Complexity Model and a Scalable Algorithm. In Machine Learning and Knowledge Discovery in Databases. 107–122.
  15. Directed Acyclic Graph Structure Learning from Dynamic Graphs. In the AAAI Conference on Artificial Intelligence. 7512–7521.
  16. On Low-Rank Directed Acyclic Graphs and Causal Structure Learning. IEEE Transactions on Neural Networks and Learning Systems 35, 4 (2024), 4924–4937.
  17. CDANs: Temporal Causal Discovery from Autocorrelated and Non-Stationary Time Series Data. In Proceedings of the 8th Machine Learning for Healthcare Conference. 186–207.
  18. IDYNO: Learning Nonparametric DAGs from Interventional Dynamic Data. In the International Conference on Machine Learning. PMLR, 6988–7001.
  19. CausalMMM: Learning Causal Structure for Marketing Mix Modeling. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining. 238–246.
  20. Rhino: Deep Causal Temporal Relationship Learning with History-dependent Noise. In The Eleventh International Conference on Learning Representations. https://openreview.net/forum?id=i_1rbq8yFWC
  21. Clive WJ Granger. 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica: journal of the Econometric Society (1969), 424–438.
  22. LoRA: Low-Rank Adaptation of Large Language Models. In the International Conference on Learning Representations.
  23. Saurabh Khanna and Vincent Y. F. Tan. 2020. Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality. In International Conference on Learning Representations. https://openreview.net/forum?id=SyxV9ANFDH
  24. Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In the International Conference on Learning Representations.
  25. Causal discovery from observational and interventional data across multiple environments. In Advances in Neural Information Processing Systems. 16942–16956.
  26. Causal Discovery in Temporal Domain from Interventional Data. In the International Conference on Information and Knowledge Management. 4074–4078.
  27. LLM-Enhanced Causal Discovery in Temporal Domain from Interventional Data. arXiv:2404.14786 [cs.AI]
  28. Constraint-Free Structure Learning with Smooth Acyclic Orientations. In the International Conference on Learning Representations.
  29. David Maxwell Chickering and David Heckerman. 1997. Efficient approximations for the marginal likelihood of Bayesian networks with hidden variables. Machine learning 29 (1997), 181–212.
  30. Learning DAGs from Data with Few Root Causes. In Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.). 16865–16888.
  31. Causal Discovery with Attention-Based Convolutional Neural Networks. Machine Learning and Knowledge Extraction 1, 1 (2019), 312–340. https://doi.org/10.3390/make1010019
  32. On the Role of Sparsity and DAG Constraints for Learning Linear DAGs. In Advances in Neural Information Processing Systems, Vol. 33. 17943–17954.
  33. Masked gradient-based causal structure learning. In the SIAM International Conference on Data Mining. 424–432.
  34. DYNOTEARS: Structure Learning from Time-Series Data. In International Conference on Artificial Intelligence and Statistics. PMLR, 1595–1605.
  35. Causal Inference on Time Series using Restricted Structural Equation Models. In Advances in Neural Information Processing Systems, Vol. 26.
  36. Network modelling methods for FMRI. Neuroimage 54, 2 (2011), 875–891.
  37. NTS-NOTEARS: Learning Nonparametric DBNs With Prior Knowledge. In International Conference on Artificial Intelligence and Statistics. PMLR, 1942–1964.
  38. Neural granger causality. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 8 (2021), 4267–4279.
  39. Scalable Causal Graph Learning through a Deep Neural Network. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (Beijing, China) (CIKM ’19). 1853–1862.
  40. DAG-GNN: DAG Structure Learning with Graph Neural Networks. In Proceedings of the 36th International Conference on Machine Learning. 7154–7163.
  41. DAGs with No Curl: An Efficient DAG Structure Learning Approach. In the International Conference on Machine Learning, Vol. 139. PMLR, 12156–12166.
  42. Deep Dag Learning of Effective Brain Connectivity for FMRI Analysis. In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). 1–5.
  43. DAGs with NO TEARS: Continuous Optimization for Structure Learning. In Advances in Neural Information Processing Systems.
  44. Jacobian Regularizer-based Neural Granger Causality. In Forty-first International Conference on Machine Learning. https://openreview.net/forum?id=FG5hjRBtpm
  45. Multivariate spatial autoregressive model for large scale social networks. Journal of Econometrics 215, 2 (2020), 591–606.

Summary

  • The paper introduces a quasi-maximum likelihood framework that enhances the accuracy of causal inference in complex time series data.
  • It integrates adaptive modules, including ACML and DGPL, to efficiently construct Directed Acyclic Graphs without heavy computational constraints.
  • Experimental results show higher True Positive Rate (TPR) and F1 scores and lower Structural Hamming Distance (SHD), outperforming state-of-the-art methods on both synthetic and real datasets.

Overview of "LOCAL: Learning with Orientation Matrix to Infer Causal Structure from Time Series Data"

Introduction and Motivation

The paper introduces a novel method called LOCAL for inferring causal structures from time series data. This approach addresses significant challenges associated with discovering Directed Acyclic Graphs (DAGs) from observational data, particularly in dynamic and high-dimensional systems. Existing methods often struggle with efficiency and scalability. The authors propose LOCAL to mitigate these issues by employing a quasi-maximum likelihood-based score function and adaptive modules.

Key Contributions

  1. Quasi-Maximum Likelihood-Based Objective: LOCAL introduces a quasi-likelihood score to estimate dynamic causal structures. This improves the robustness and accuracy of causal inference over previous approaches, which rely heavily on hard acyclicity constraints and computationally expensive matrix operations.
  2. Adaptive Modules for Improved Causal Discovery:
    • Asymptotic Causal Mask Learning (ACML): This module uses learnable priority vectors and the Gumbel-Sigmoid function to facilitate the efficient creation of DAGs, bypassing the need for hard constraints.
    • Dynamic Graph Parameter Learning (DGPL): This module breaks down causal learning into decomposed matrix multiplications, enhancing the interpretability of causal relationships in high-dimensional data.
  3. Efficiency and Performance: The LOCAL framework demonstrates significant improvements in efficiency, requiring substantially less computational time while achieving superior accuracy on synthetic and real datasets.
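To make the two modules concrete, the following is a minimal NumPy sketch of the ideas behind ACML and DGPL, not the authors' implementation: the function names (`gumbel_sigmoid`, `acml_mask`, `dgpl_weights`), the exact form of the relaxation, and the rank choice are all illustrative assumptions. The key point is that a priority vector induces an ordering under which any thresholded mask is acyclic by construction, while a low-rank factorization keeps the parameter count linear in the number of variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_sigmoid(logits, tau=1.0):
    """Soft Bernoulli relaxation: sigmoid((logits + logistic noise) / tau)."""
    noise = rng.gumbel(size=logits.shape) - rng.gumbel(size=logits.shape)
    return 1.0 / (1.0 + np.exp(-(logits + noise) / tau))

def acml_mask(priority, tau=0.2):
    """Build a soft causal mask from a learnable priority vector.

    Edge i -> j is only favored when priority[j] > priority[i], so any
    hard threshold of the mask respects a topological order (a DAG).
    """
    diff = priority[None, :] - priority[:, None]   # diff[i, j] = p_j - p_i
    mask = gumbel_sigmoid(diff, tau)
    np.fill_diagonal(mask, 0.0)                    # no self-loops
    return mask

def dgpl_weights(U, V):
    """DGPL-style low-rank parameterization: W = U @ V^T with U, V in R^{d x r}."""
    return U @ V.T

d, r = 5, 2
priority = rng.normal(size=d)
U, V = rng.normal(size=(d, r)), rng.normal(size=(d, r))
W = dgpl_weights(U, V) * acml_mask(priority)       # masked weighted adjacency

# Hard-thresholding the ordering is acyclic: permuting rows/columns by
# ascending priority yields a strictly upper-triangular edge pattern.
hard = (priority[None, :] - priority[:, None]) > 0
order = np.argsort(priority)
assert np.allclose(np.tril(hard[np.ix_(order, order)]), 0)
```

In training, `priority`, `U`, and `V` would be optimized jointly against the quasi-likelihood score; the Gumbel-Sigmoid noise keeps the discrete mask differentiable.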

Experimental Verification

The paper validates LOCAL's effectiveness through extensive experiments:

  • Synthetic Data: LOCAL significantly outperforms traditional methods like DYNOTEARS and its extensions, especially as the dimensionality of the data increases. The experiments show higher True Positive Rate (TPR) and F1 scores and lower Structural Hamming Distance (SHD).
  • NetSim and CausalTime Datasets: The method demonstrates strong performance in capturing complex nonlinear causal relationships, surpassing state-of-the-art methods in both AUROC and AUPRC metrics.
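For readers unfamiliar with these graph-recovery metrics, here is a simplified sketch of how TPR, SHD, and F1 are commonly computed from binary adjacency matrices; the paper's exact evaluation protocol may differ (e.g., in how reversed edges are counted), and the function name `graph_metrics` is illustrative.

```python
import numpy as np

def graph_metrics(true_adj, est_adj):
    """TPR, SHD, and F1 for directed binary adjacency matrices.

    SHD counts the edge additions, deletions, and reversals needed to
    turn est_adj into true_adj; lower is better, unlike TPR and F1.
    """
    t, e = true_adj.astype(bool), est_adj.astype(bool)
    tp = np.sum(t & e)                    # correctly recovered edges
    fp = np.sum(~t & e)                   # extra edges
    fn = np.sum(t & ~e)                   # missed edges
    tpr = tp / max(t.sum(), 1)
    precision = tp / max(e.sum(), 1)
    f1 = 2 * precision * tpr / max(precision + tpr, 1e-12)
    rev = np.sum(~t & e & t.T)            # reversed edges count once, not twice
    shd = int(fp + fn - rev)
    return tpr, shd, f1

# Tiny example: true chain 0 -> 1 -> 2, estimate reverses the second edge.
true_adj = np.array([[0, 1, 0],
                     [0, 0, 1],
                     [0, 0, 0]])
est_adj  = np.array([[0, 1, 0],
                     [0, 0, 0],
                     [0, 1, 0]])
tpr, shd, f1 = graph_metrics(true_adj, est_adj)   # one reversal costs SHD 1
```

AUROC and AUPRC, used on NetSim and CausalTime, generalize this by sweeping a threshold over edge scores instead of fixing a single binary graph.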

Implications and Future Work

The research indicates that LOCAL's innovative approach to architectural design and objective formulation allows for improved scalability and accuracy in dynamic causal discovery. This method can be pivotal for applications requiring efficient causal inference without the computational costs associated with traditional methods.

Future work may involve exploring additional neural architectures, such as attention-based mechanisms, to further enhance LOCAL's capabilities and extend its applicability to even more complex causal structures.

Conclusion

LOCAL presents a promising advancement in the field of causal discovery from time series data, offering a robust and efficient solution to longstanding challenges. Its integration of quasi-maximum likelihood estimation with novel adaptive modules establishes a new paradigm for rapid and accurate causal inference in high-dimensional dynamic systems.
