
On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation (2306.12380v1)

Published 21 Jun 2023 in cs.LG, cs.IT, math.IT, math.PR, math.ST, and stat.TH

Abstract: The dependence on training data of the Gibbs algorithm (GA) is analytically characterized. By adopting the expected empirical risk as the performance metric, the sensitivity of the GA is obtained in closed form. In this case, sensitivity is the performance difference with respect to an arbitrary alternative algorithm. This description enables the development of explicit expressions involving the training errors and test errors of GAs trained with different datasets. Using these tools, dataset aggregation is studied and different figures of merit to evaluate the generalization capabilities of GAs are introduced. For particular sizes of such datasets and parameters of the GAs, a connection between Jeffrey's divergence, training and test errors is established.
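For context, a minimal sketch (in LaTeX) of the objects named in the abstract, written in the standard notation of the Gibbs algorithm as the solution of empirical risk minimization with relative-entropy regularization; the paper's own symbols and parametrization may differ.

% Sketch under standard assumptions: training dataset z = (z_1, ..., z_n),
% models \theta, loss \ell, reference (prior) measure Q, parameter \lambda > 0.
\[
  \mathsf{L}_{z}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta, z_i)
  \qquad \text{(empirical risk on the dataset } z\text{)}
\]
% The Gibbs algorithm outputs the Gibbs (posterior) measure induced by the
% empirical risk, i.e. the exponentially tilted reference measure:
\[
  \frac{\mathrm{d}P^{(\lambda)}_{\Theta \mid Z = z}}{\mathrm{d}Q}(\theta)
  \;=\;
  \frac{\exp\!\bigl(-\tfrac{1}{\lambda}\,\mathsf{L}_{z}(\theta)\bigr)}
       {\displaystyle\int \exp\!\bigl(-\tfrac{1}{\lambda}\,\mathsf{L}_{z}(\nu)\bigr)\,\mathrm{d}Q(\nu)}
\]
% Jeffrey's divergence is the symmetrized relative entropy, e.g. between the
% Gibbs measures obtained from two different datasets P_1 and P_2:
\[
  D_{J}(P_1 \,\|\, P_2) \;=\; D_{\mathrm{KL}}(P_1 \,\|\, P_2) \;+\; D_{\mathrm{KL}}(P_2 \,\|\, P_1)
\]

In these terms, the abstract's claim is that the expected empirical (training and test) risks of such Gibbs measures, trained on different datasets, can be related through closed-form expressions, and that for particular dataset sizes and values of the parameter \lambda this relation involves Jeffrey's divergence.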
