
Composite likelihood inference for the Poisson log-normal model (2402.14390v2)

Published 22 Feb 2024 in stat.CO and stat.ME

Abstract: Inferring the parameters of a latent variable model can be a daunting task when the conditional distribution of the latent variables given the observed ones is intractable. Variational approaches are computationally efficient but may lack theoretical guarantees on the estimates, while sampling-based solutions are quite the opposite. Starting from already available variational approximations, we define a first Monte Carlo EM algorithm to obtain maximum likelihood estimators, focusing on the Poisson log-normal model, which provides a generic framework for the analysis of multivariate count data. We then extend this algorithm to the case of a composite likelihood so that it can handle higher-dimensional count data.
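
For context, the Poisson log-normal (PLN) model mentioned in the abstract couples a latent Gaussian layer with conditionally independent Poisson counts. In its classical form (Aitchison and Ho, 1989), each observation i has a latent vector Z_i and observed counts Y_i with

    Z_i \sim \mathcal{N}_p(\mu, \Sigma), \qquad Y_{ij} \mid Z_{ij} \sim \mathcal{P}\!\left(\exp(Z_{ij})\right), \quad j = 1, \dots, p.

The conditional distribution of Z_i given Y_i has no closed form, so the exact E-step of an EM algorithm is intractable; this is the gap that a variationally assisted Monte Carlo E-step is meant to fill. (The paper's version may include covariates and offsets in the latent mean; they are omitted here for brevity.)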
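
As a rough sketch of the kind of Monte Carlo E-step the abstract describes, the snippet below (illustrative only, not the authors' code; the function name, the diagonal-Gaussian proposal q = N(m, diag(s2)), and all parameter names are assumptions) draws latent samples from a variational approximation and reweights them by self-normalized importance sampling:

import numpy as np

rng = np.random.default_rng(0)

def is_e_step(y, mu, Sigma, m, s2, n_samples=2000):
    """Self-normalized importance-sampling E-step: draw z_k ~ q = N(m, diag(s2))
    and weight by p(y, z_k) / q(z_k) to approximate E[h(Z) | Y = y]."""
    p = y.shape[0]
    z = m + np.sqrt(s2) * rng.standard_normal((n_samples, p))

    # log p(y | z): independent Poisson counts with rates exp(z);
    # the -log(y!) term is constant in z and cancels after normalization.
    log_lik = (y * z - np.exp(z)).sum(axis=1)

    # log p(z): latent Gaussian layer N(mu, Sigma) (2*pi constants dropped,
    # since they also cancel in the self-normalized weights).
    diff = z - mu
    quad = np.einsum("kp,pq,kq->k", diff, np.linalg.inv(Sigma), diff)
    log_prior = -0.5 * (quad + np.linalg.slogdet(Sigma)[1])

    # log q(z): the diagonal-Gaussian variational proposal (constants dropped).
    log_q = -0.5 * (((z - m) ** 2 / s2).sum(axis=1) + np.log(s2).sum())

    log_w = log_lik + log_prior - log_q
    w = np.exp(log_w - log_w.max())
    return z, w / w.sum()

# Toy usage: one 3-dimensional count vector; the proposal mean log(y + 1)
# is a crude stand-in for a fitted variational mean.
y = np.array([4.0, 0.0, 7.0])
z, w = is_e_step(y, mu=np.zeros(3), Sigma=np.eye(3),
                 m=np.log(y + 1.0), s2=np.ones(3))
print("E[Z | Y = y] ~", w @ z)  # feeds the M-step updates for (mu, Sigma)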
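
The composite-likelihood extension can be understood as swapping the full p-dimensional log-likelihood for a sum of low-dimensional marginal blocks; in the pairwise version,

    c\ell(\theta; y) = \sum_{j < k} \log p_\theta(y_j, y_k),

each term involves only a bivariate latent Gaussian, so an importance-sampling E-step like the one sketched above works with two-dimensional integrals whatever the value of p. This block structure is the standard composite-likelihood construction (Lindsay, 1988); the precise choice of blocks in the paper may differ.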
