Disciplined Saddle Programming (2301.13427v2)

Published 31 Jan 2023 in math.OC and cs.MS

Abstract: We consider convex-concave saddle point problems, and more generally convex optimization problems we refer to as $\textit{saddle problems}$, which include the partial supremum or infimum of convex-concave saddle functions. Saddle problems arise in a wide range of applications, including game theory, machine learning, and finance. It is well known that a saddle problem can be reduced to a single convex optimization problem by dualizing either the convex (min) or concave (max) objectives, reducing a min-max problem into a min-min (or max-max) problem. Carrying out this conversion by hand can be tedious and error prone. In this paper we introduce $\textit{disciplined saddle programming}$ (DSP), a domain specific language (DSL) for specifying saddle problems, for which the dualizing trick can be automated. The language and methods are based on recent work by Juditsky and Nemirovski arXiv:2102.01002 [math.OC], who developed the idea of conic-representable saddle point programs, and showed how to carry out the required dualization automatically using conic duality. Juditsky and Nemirovski's conic representation of saddle problems extends Nesterov and Nemirovski's earlier development of conic representable convex problems; DSP can be thought of as extending disciplined convex programming (DCP) to saddle problems. Just as DCP makes it easy for users to formulate and solve complex convex problems, DSP allows users to easily formulate and solve saddle problems. Our method is implemented in an open-source package, also called DSP.
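
As a concrete illustration of the DSL, the following sketch specifies and solves a small matrix game, a textbook saddle point problem, using the open-source DSP package on top of CVXPY. The names dsp.inner, dsp.MinimizeMaximize, and dsp.SaddlePointProblem follow the interface described in the paper, but they are assumptions here; treat the snippet as an illustrative sketch rather than definitive usage.

    import numpy as np
    import cvxpy as cp
    import dsp  # open-source package accompanying the paper (assumed import name)

    # Matrix game: the x player minimizes and the y player maximizes the
    # bilinear payoff x^T C y over the probability simplex.
    C = np.array([[1.0, 2.0], [3.0, 1.0]])
    x = cp.Variable(2)
    y = cp.Variable(2)

    # Saddle function: convex (affine) in x, concave (affine) in y.
    f = dsp.inner(x, C @ y)

    # Saddle point problem: minimize over x, maximize over y.
    obj = dsp.MinimizeMaximize(f)
    constraints = [x >= 0, cp.sum(x) == 1, y >= 0, cp.sum(y) == 1]
    prob = dsp.SaddlePointProblem(obj, constraints)
    prob.solve()  # dualizes the concave part and solves a single convex problem

    print(prob.value)        # value of the game
    print(x.value, y.value)  # an (approximate) saddle point

The point of the example is the automation the abstract describes: the user writes the min-max problem directly, and DSP carries out the conic dualization of the inner maximization so that only a single convex problem is passed to the solver.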

References (52)
  1. H. Markowitz “Portfolio Selection” In Journal of Finance 7, 1952, pp. 77–91
  2. O. Morgenstern and J. von Neumann “Theory of games and economic behavior” Princeton University Press, 1953
  3. M. Sion “On general minimax theorems” In Pacific Journal of Mathematics 8.1, 1958, pp. 171–176
  4. R. Rockafellar “Convex analysis” Princeton University Press, 1970
  5. G. Korpelevich “The extragradient method for finding saddle points and other problems” In Matecon 12, 1976, pp. 747–756
  6. “Introduction to minimax” Courier Corporation, 1990
  7. “Asset Allocation” In The Journal of Fixed Income 1.2 Institutional Investor Journals Umbrella, 1991, pp. 7–18 URL: https://jfi.pm-research.com/content/1/2/7
  8. Y. Nesterov and A. Nemirovski “Conic formulation of a convex programming problem and duality” In Optimization Methods & Software 1, 1992, pp. 95–115
  9. “Semi-infinite programming: Theory, methods, and applications” In SIAM review 35.3, 1993, pp. 380–429
  10. “Minimax and applications” Springer Science & Business Media, 1995
  11. A. Nemirovski “On self-concordant convex–concave functions” In Optimization Methods and Software 11.1-4 Taylor & Francis, 1999, pp. 303–384 DOI: 10.1080/10556789908805755
  12. “The worst-case risk of a portfolio”, 2000 URL: https://web.stanford.edu/~boyd/papers/pdf/risk_bnd.pdf
  13. D. Goldfarb and G. Iyengar “Robust Portfolio Selection Problems” In Mathematics of Operations Research 28.1 INFORMS, 2003, pp. 1–38 URL: http://www.jstor.org/stable/4126989
  14. “An interior-point method for a class of saddle-point problems” In Journal of Optimization Theory and Applications 116.3 Springer, 2003, pp. 559–590
  15. S. Boyd and L. Vandenberghe “Convex Optimization” Cambridge University Press, 2004
  16. J. Lofberg “YALMIP: a toolbox for modeling and optimization in MATLAB” In 2004 IEEE International Conference on Robotics and Automation (IEEE Cat. No.04CH37508), 2004, pp. 284–289 DOI: 10.1109/CACSD.2004.1393890
  17. A. Nemirovski “Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems” In SIAM Journal on Optimization 15.1 SIAM, 2004, pp. 229–251
  18. Y. Nesterov “Excessive gap technique in nonsmooth convex minimization” In SIAM Journal on Optimization 16.1 SIAM, 2005, pp. 235–249
  19. Y. Nesterov “Smooth minimization of non-smooth functions” In Mathematical programming 103.1 Springer, 2005, pp. 127–152
  20. “Convex Analysis” Springer, 2006
  21. M. Grant, S. Boyd and Y. Ye “Disciplined convex programming” In Global optimization Springer, 2006, pp. 155–210
  22. Y. Nesterov and B. Polyak “Cubic regularization of Newton method and its global performance” In Mathematical Programming 108.1 Springer, 2006, pp. 177–205
  23. Y. Nesterov “Dual extrapolation and its applications to solving variational inequalities and related problems” In Mathematical Programming 109.2 Springer, 2007, pp. 319–344
  24. Y. Nesterov “Accelerating the cubic regularization of Newton’s method on convex problems” In Mathematical Programming 112.1 Springer, 2008, pp. 159–181
  25. A. Ben-Tal, L. El Ghaoui and A. Nemirovski “Robust optimization” Princeton University Press, 2009
  26. “Cutting-set methods for robust convex optimization with pessimizing oracles” In Optimization Methods & Software 24.3 Taylor & Francis, 2009, pp. 381–406
  27. “Subgradient methods for saddle-point problems” In Journal of optimization theory and applications 142.1 Springer, 2009, pp. 205–228
  28. R. Rockafellar and R. Wets “Variational analysis” Springer Science & Business Media, 2009
  29. D. Bertsimas, D. Brown and C. Caramanis “Theory and applications of robust optimization” In SIAM review 53.3 SIAM, 2011, pp. 464–501
  30. A. Chambolle and T. Pock “A first-order primal-dual algorithm for convex problems with applications to imaging” In Journal of mathematical imaging and vision 40.1 Springer, 2011, pp. 120–145
  31. Y. Chen, G. Lan and Y. Ouyang “Optimal Primal-Dual Methods for a Class of Saddle Point Problems” arXiv, 2013 DOI: 10.48550/ARXIV.1309.5548
  32. L. Condat “A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms” In Journal of optimization theory and applications 158.2 Springer, 2013, pp. 460–479
  33. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio “Generative Adversarial Nets” In Advances in Neural Information Processing Systems 27 Curran Associates, Inc., 2014
  34. M. Grant and S. Boyd “CVX: Matlab software for disciplined convex programming, version 2.1”, 2014
  35. “Convex Optimization in Julia” In SC14 Workshop on High Performance Technical Computing in Dynamic Languages, 2014 arXiv:1410.4821 [math-oc]
  36. “Preconditioned Douglas–Rachford splitting methods for convex-concave saddle-point problems” In SIAM Journal on Numerical Analysis 53.1 SIAM, 2015, pp. 421–444
  37. E. Fama and K. French “A five-factor asset pricing model” In Journal of Financial Economics 116.1, 2015, pp. 1–22 DOI: 10.1016/j.jfineco.2014.10.010
  38. A. Chambolle and T. Pock “On the ergodic convergence rates of a first-order primal–dual algorithm” In Mathematical Programming 159.1 Springer, 2016, pp. 253–287
  39. S. Diamond and S. Boyd “CVXPY: A Python-embedded modeling language for convex optimization” In Journal of Machine Learning Research 17.83, 2016, pp. 1–5
  40. “Multi-Period Trading via Convex Optimization” In Foundations and Trends in Optimization 3.1, 2017, pp. 1–76 DOI: 10.1561/2400000023
  41. F. Harrell Jr. and T. Cason “Titanic dataset”, 2017 URL: https://www.openml.org/d/40945
  42. G. Cornuéjols, J. Peña and R. Tütüncü “Optimization Methods in Finance” Cambridge University Press, 2018 DOI: 10.1017/9781107297340
  43. “Distributionally robust optimization with correlated data from vector autoregressive processes” In Operations Research Letters 47.4, 2019, pp. 294–299 DOI: 10.1016/j.orl.2019.04.005
  44. “Efficient Algorithms for Smooth Minimax Optimization” In Advances in Neural Information Processing Systems 32 Curran Associates, Inc., 2019 URL: https://proceedings.neurips.cc/paper/2019/file/05d0abb9a864ae4981e933685b8b915c-Paper.pdf
  45. T. Broderick, R. Giordano and R. Meager “An Automatic Finite-Sample Robustness Metric: When Can Dropping a Little Data Make a Big Difference?” In arXiv preprint arXiv:2011.14999, 2020
  46. A. Fu, B. Narasimhan and S. Boyd “CVXR: An R Package for Disciplined Convex Optimization” In Journal of Statistical Software 94.14, 2020, pp. 1–34 DOI: 10.18637/jss.v094.i14
  47. T. Lin, C. Jin and M. Jordan “Near-optimal algorithms for minimax optimization” In Conference on Learning Theory, 2020, pp. 2738–2779 PMLR
  48. S. Barratt, G. Angeris and S. Boyd “Optimal representative sample weighting” In Statistics and Computing 31.2 Springer, 2021, pp. 1–14
  49. K. French “Kenneth R. French Data Library”, 2022 URL: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html
  50. A. Juditsky and A. Nemirovski “On well-structured convex–concave saddle point problems and variational inequalities with monotone operators” In Optimization Methods and Software 37.5 Taylor & Francis, 2022, pp. 1567–1602 DOI: 10.1080/10556788.2021.1928121
  51. E. Luxenberg, P. Schiele and S. Boyd “Robust Bond Portfolio Construction via Convex-Concave Saddle Point Optimization”, 2022 DOI: 10.48550/ARXIV.2212.02570
  52. E. Ryu and W. Yin “Large-Scale Convex Optimization: Algorithms & Analyses via Monotone Operators” Cambridge University Press, 2022