
Probabilistic Contraction Analysis of Iterated Random Operators (1804.01195v6)

Published 4 Apr 2018 in math.PR, cs.LG, and math.OC

Abstract: In many branches of engineering, the Banach contraction mapping theorem is employed to establish the convergence of certain deterministic algorithms. Randomized versions of these algorithms have been developed and have proved useful in data-driven problems. In one class of randomized algorithms, the contraction map is approximated in each iteration by an operator that uses independent and identically distributed samples of certain random variables. This yields iterated random operators acting on an initial point in a complete metric space, and the iterates form a Markov chain. In this paper, we develop a new stochastic-dominance-based proof technique, called probabilistic contraction analysis, for establishing the convergence in probability of Markov chains generated by such iterated random operators in a certain limiting regime. The methods developed in this paper provide a general framework for understanding the convergence of a wide variety of Monte Carlo methods in which a contractive property is present. We apply the convergence result to establish the convergence of fitted value iteration and fitted relative value iteration in continuous-state, continuous-action Markov decision problems as representative applications of the general framework developed here.
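The setting described in the abstract can be made concrete with a small numerical sketch. The Python snippet below is illustrative only, not the paper's code: the MDP sizes, the discount factor gamma, and the per-iteration sample count N are hypothetical choices. It implements empirical value iteration on a random finite MDP, where the exact Bellman operator T is a gamma-contraction in the sup norm and each iterate applies a random operator that replaces the expectation over next states with an average of N i.i.d. samples, so the iterates form a Markov chain.

```python
import numpy as np

# Minimal sketch (assumed example, not the paper's algorithm verbatim):
# empirical value iteration on a small random finite MDP. The exact Bellman
# operator T is a gamma-contraction; the random operator T_hat replaces the
# expectation over next states with a mean of N i.i.d. samples, so the
# sequence v_{k+1} = T_hat_k(v_k) is a Markov chain on R^{|S|}.

rng = np.random.default_rng(0)
nS, nA, gamma, N = 5, 3, 0.9, 200               # hypothetical sizes and parameters

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition kernel P[s, a, s']
c = rng.uniform(0.0, 1.0, size=(nS, nA))        # stage costs c[s, a]

def exact_bellman(v):
    """Deterministic contraction T: (Tv)(s) = min_a c(s,a) + gamma * E[v(s')]."""
    return (c + gamma * P @ v).min(axis=1)

def empirical_bellman(v, n_samples):
    """Random operator T_hat: the expectation replaced by an i.i.d. sample mean."""
    q = np.empty((nS, nA))
    for s in range(nS):
        for a in range(nA):
            nxt = rng.choice(nS, size=n_samples, p=P[s, a])  # i.i.d. next states
            q[s, a] = c[s, a] + gamma * v[nxt].mean()
    return q.min(axis=1)

# Fixed point of the exact contraction T, for comparison.
v_star = np.zeros(nS)
for _ in range(500):
    v_star = exact_bellman(v_star)

# Iterated random operators: v_{k+1} = T_hat_k(v_k).
v = np.zeros(nS)
for _ in range(200):
    v = empirical_bellman(v, N)

print("sup-norm error of empirical iterates:", np.abs(v - v_star).max())
```

Increasing the sample count N tightens the sup-norm error of the empirical iterates around the fixed point of T, which illustrates the limiting regime in which the paper's probabilistic contraction analysis establishes convergence in probability.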

