
Stable Training of Probabilistic Models Using the Leave-One-Out Maximum Log-Likelihood Objective (2310.03556v2)

Published 5 Oct 2023 in stat.ML, cs.LG, cs.SY, and eess.SY

Abstract: Probabilistic modelling of power system operation and planning processes depends on data-driven methods, which require sufficiently large datasets. When the available historical data is too limited, it is desirable to model the underlying data generation mechanism as a probability distribution, both to assess the data quality and to generate more data if needed. Kernel density estimation (KDE) based models are popular choices for this task, but they fail to adapt to data regions with varying densities. In this paper, an adaptive KDE model is employed to circumvent this limitation, where each kernel in the model has an individual bandwidth. The leave-one-out maximum log-likelihood (LOO-MLL) criterion is proposed to prevent the singular solutions that the regular MLL criterion gives rise to, and it is proven that LOO-MLL does so. Relying on this guaranteed robustness, the model is extended with adjustable weights for the kernels. In addition, a modified expectation-maximization algorithm is employed to accelerate the optimization reliably. The performance of the proposed method and models is demonstrated on two power systems datasets using different statistical tests and by comparison with Gaussian mixture models. Results show that the proposed models have promising performance, in addition to their singularity-prevention guarantees.
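The core mechanism described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the function name loo_mll, the Gaussian-kernel parametrisation, and the log-domain parameters are assumptions. It evaluates a leave-one-out log-likelihood for an adaptive KDE in which every kernel is centred on a training point and carries its own bandwidth and weight; excluding the kernel centred on the point being evaluated removes the term that lets a single bandwidth collapse to zero under the plain MLL objective.

```python
import numpy as np
from scipy.special import logsumexp

def loo_mll(X, log_sigma, log_w):
    """Leave-one-out log-likelihood for an adaptive Gaussian KDE (sketch).

    X:         (N, d) training data; one kernel is centred on each point.
    log_sigma: (N,)   per-kernel log bandwidths.
    log_w:     (N,)   unnormalised per-kernel log weights.
    """
    N, d = X.shape
    sigma = np.exp(log_sigma)                  # individual bandwidths
    log_weights = log_w - logsumexp(log_w)     # normalised mixture weights

    # log N(x_n | x_m, sigma_m^2 I) for all pairs (n, m)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    log_kernel = (-0.5 * sq_dists / sigma[None, :] ** 2
                  - d * np.log(sigma)[None, :]
                  - 0.5 * d * np.log(2.0 * np.pi))

    # Leave-one-out: drop the kernel centred on the evaluated point,
    # which is what prevents the singular (zero-bandwidth) solutions.
    log_terms = log_weights[None, :] + log_kernel
    np.fill_diagonal(log_terms, -np.inf)

    return np.sum(logsumexp(log_terms, axis=1))   # objective to maximise
```

The paper optimises this kind of objective with a modified expectation-maximization procedure and compares the resulting models against Gaussian mixture models; the sketch above leaves the choice of optimiser open.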

