Variance-reduction for Variational Inequality Problems with Bregman Distance Function (arXiv:2405.10735v2)

Published 17 May 2024 in math.OC

Abstract: In this paper, we address variational inequalities (VI) with a finite-sum structure. We introduce a novel single-loop stochastic variance-reduced algorithm, incorporating the Bregman distance function, and establish an optimal convergence guarantee under a monotone setting. Additionally, we explore a structured class of non-monotone problems that exhibit weak Minty solutions, and analyze the complexity of our proposed method, highlighting a significant improvement over existing approaches. Numerical experiments are presented to demonstrate the performance of our algorithm compared to state-of-the-art methods.
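
To make the setting concrete, here are the standard definitions behind the abstract (stated in common textbook form, not quoted from the paper): the finite-sum variational inequality, the Bregman distance generated by a differentiable convex function $\phi$, and one common form of the weak Minty condition (conventions for the constant $\rho$ vary across the literature).

\[
\text{find } x^\star \in \mathcal{X} \ \text{such that} \ \langle F(x^\star),\, x - x^\star \rangle \ge 0 \quad \forall x \in \mathcal{X},
\qquad F(x) = \frac{1}{n}\sum_{i=1}^{n} F_i(x),
\]
\[
D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle,
\]
\[
\text{weak Minty:} \quad \exists\, x^\star \ \text{with} \ \langle F(x),\, x - x^\star \rangle \ge -\tfrac{\rho}{2}\, \lVert F(x) \rVert^2 \quad \forall x \in \mathcal{X},\ \rho \ge 0.
\]

As an illustration of the algorithmic template the abstract describes (variance reduction combined with Bregman proximal steps, in a single loop), the Python sketch below implements a generic loopless-SVRG-style mirror-prox iteration with the negative-entropy Bregman geometry on the simplex. This is a sketch assembled from standard components, not the paper's algorithm; the function names, the step size, and the snapshot probability p are illustrative assumptions.

import numpy as np

def entropy_prox(x, g, step):
    # Bregman proximal step with the negative-entropy distance on the simplex:
    # argmin_u <step*g, u> + D_phi(u, x) has the closed form below.
    u = x * np.exp(-step * g)
    return u / u.sum()

def vr_mirror_prox(F_components, x0, step=0.1, p=0.05, iters=1000, rng=None):
    # Generic single-loop (loopless-SVRG) variance-reduced mirror-prox sketch.
    # F_components: list of component operators F_i; the VI operator is
    # their average F(x) = (1/n) * sum_i F_i(x).
    # p: probability of refreshing the full-operator snapshot each iteration.
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(F_components)
    x = x0.copy()
    w = x0.copy()                                   # snapshot point
    Fw = sum(Fi(w) for Fi in F_components) / n      # full operator at snapshot
    for _ in range(iters):
        i = rng.integers(n)
        g = F_components[i](x) - F_components[i](w) + Fw    # SVRG-style estimate of F(x)
        y = entropy_prox(x, g, step)                # extrapolation (half) step
        j = rng.integers(n)
        g_y = F_components[j](y) - F_components[j](w) + Fw  # fresh estimate at y
        x = entropy_prox(x, g_y, step)              # update step
        if rng.random() < p:                        # loopless snapshot refresh
            w = x.copy()
            Fw = sum(Fi(w) for Fi in F_components) / n
    return x

# Toy usage (hypothetical): F_i(x) = A_i @ x with skew-symmetric A_i is a
# monotone operator, so the averaged operator defines a monotone VI.
rng = np.random.default_rng(1)
mats = []
for _ in range(5):
    B = rng.standard_normal((4, 4))
    mats.append(B - B.T)                            # skew-symmetric => monotone
F_list = [(lambda A: (lambda x: A @ x))(A) for A in mats]
x_sol = vr_mirror_prox(F_list, np.ones(4) / 4)

The loopless snapshot rule is what keeps such a method single-loop: rather than running a nested inner loop of fixed length, the full operator is re-evaluated with small probability p at each iteration, matching the expected per-epoch cost of a double-loop SVRG scheme.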
