Strongly Polynomial Frame Scaling to High Precision (2402.04799v1)

Published 7 Feb 2024 in cs.DS and math.OC

Abstract: The frame scaling problem is: given vectors $U := \{u_{1}, \ldots, u_{n}\} \subseteq \mathbb{R}^{d}$, marginals $c \in \mathbb{R}^{n}_{++}$, and precision $\varepsilon > 0$, find left and right scalings $L \in \mathbb{R}^{d \times d}$, $r \in \mathbb{R}^{n}$ such that $(v_1, \dots, v_n) := (L u_1 r_1, \dots, L u_n r_n)$ simultaneously satisfies $\sum_{i=1}^{n} v_i v_i^{\mathsf{T}} = I_d$ and $\|v_{j}\|_{2}^{2} = c_{j}, \forall j \in [n]$, up to error $\varepsilon$. This problem has appeared in a variety of fields throughout linear algebra and computer science. In this work, we give a strongly polynomial algorithm for frame scaling with $\log(1/\varepsilon)$ convergence. This answers a question of Diakonikolas, Tzamos and Kane (STOC 2023), who gave the first strongly polynomial randomized algorithm with $\mathrm{poly}(1/\varepsilon)$ convergence for the special case $c = \frac{d}{n} 1_{n}$. Our algorithm is deterministic, applies for general $c \in \mathbb{R}^{n}_{++}$, and requires $O(n^{3} \log(n/\varepsilon))$ iterations as compared to $O(n^{5} d^{11}/\varepsilon^{5})$ iterations of DTK. By lifting the framework of Linial, Samorodnitsky and Wigderson (Combinatorica 2000) for matrix scaling to frames, we are able to simplify both the algorithm and analysis. Our main technical contribution is to generalize the potential analysis of LSW to the frame setting and compute an update step in strongly polynomial time that achieves geometric progress in each iteration. In fact, we can adapt our results to give an improved analysis of strongly polynomial matrix scaling, reducing the $O(n^{5} \log(n/\varepsilon))$ iteration bound of LSW to $O(n^{3} \log(n/\varepsilon))$. Additionally, we prove a novel bound on the size of approximate frame scaling solutions, involving the condition measure $\bar{\chi}$ studied in the linear programming literature, which may be of independent interest.
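For intuition, the two constraints can be enforced one at a time by the classical alternating (Sinkhorn-style) normalization: whiten with $S^{-1/2}$, where $S = \sum_i v_i v_i^{\mathsf{T}}$, then rescale each column to squared norm $c_j$. Note that $\sum_j c_j = d$ is necessary for feasibility, since $\mathrm{tr}\big(\sum_i v_i v_i^{\mathsf{T}}\big) = \sum_j \|v_j\|_2^2$. The sketch below is a minimal NumPy illustration of this simple fixed-point heuristic (the function name, tolerance, and iteration cap are ours); it is not the strongly polynomial algorithm of the paper, whose update step and potential analysis are its main contributions.

```python
import numpy as np

def frame_scaling(U, c, eps=1e-9, max_iter=10_000):
    """Alternating scaling heuristic for the frame scaling problem.

    Given u_1, ..., u_n in R^d (the columns of U) and marginals c in
    R^n_{++} with sum(c) = d, alternately enforce the two constraints:
      (1) sum_i v_i v_i^T = I_d   (left-multiply by S^{-1/2}),
      (2) ||v_j||_2^2 = c_j       (rescale each column).
    """
    V = U.astype(float).copy()
    d, n = V.shape
    for _ in range(max_iter):
        # Left scaling: whiten so that V @ V.T = I_d.
        S = V @ V.T
        w, Q = np.linalg.eigh(S)
        V = Q @ np.diag(w ** -0.5) @ Q.T @ V
        # Right scaling: set each column's squared norm to c_j.
        norms2 = np.sum(V * V, axis=0)
        V = V * np.sqrt(c / norms2)
        # The column rescaling perturbs constraint (1); stop once the
        # left marginal error is below the target precision.
        if np.linalg.norm(V @ V.T - np.eye(d)) < eps:
            break
    return V

# Example: 4 random vectors in R^2 with uniform marginals c = (d/n) 1_n.
rng = np.random.default_rng(0)
U = rng.standard_normal((2, 4))
c = np.full(4, 2 / 4)
V = frame_scaling(U, c)
print(np.round(V @ V.T, 6))           # approximately I_2
print(np.round(np.sum(V * V, 0), 6))  # approximately c
```

For vectors in general position this iteration converges in practice, but its iteration count can degrade with the precision and conditioning of the input; obtaining $\log(1/\varepsilon)$ convergence in strongly polynomial time is exactly what the paper establishes.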

References (45)
  1. Much faster algorithms for matrix scaling. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017.
  2. Operator Scaling via Geodesically Convex Optimization, Invariant Theory and Polynomial Identity Testing. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2018.
  3. On radial isotropic position: Theory and algorithms. arXiv preprint arXiv:2005.04918, 2020.
  4. Frank Barthe. On a reverse form of the Brascamp-Lieb inequality. Inventiones mathematicae, 134(2), 1998.
  5. Rajendra Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
  6. Efficient algorithms for tensor scaling, quantum marginals, and moment polytopes. In 2018 IEEE Symposium on Foundations of Computer Science (FOCS), 2018.
  7. Towards a theory of non-commutative optimization: geodesic 1st and 2nd order methods for moment maps and polytopes. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 845–861. IEEE, 2019.
  8. Interior-point methods for unconstrained geometric programming and scaling problems. arXiv preprint arXiv:2008.12110, 2020.
  9. Finite Frames: Theory and Applications. Birkhäuser Basel, 2013.
  10. Matrix scaling and balancing via box constrained Newton’s method and interior point methods. In 2017 IEEE Symposium on Foundations of Computer Science (FOCS). IEEE, 2017.
  11. A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix. In Proceedings of the 52nd Annual ACM Symposium on Theory of Computing (STOC). ACM, 2020.
  12. A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix. Mathematical Programming, pages 1–72, 2023.
  13. An Accelerated Newton-Dinkelbach Method and Its Application to Two Variables per Inequality Systems. In 29th Annual European Symposium on Algorithms (ESA), 2021.
  14. Forster decomposition and learning halfspaces with noise. In 2021 Conference on Neural Information Processing Systems (NeurIPS), 2021.
  15. A strongly polynomial algorithm for approximate forster transforms and its application to halfspace learning. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1741–1754, 2023.
  16. W. Dinkelbach. On nonlinear fractional programming. Management Science, 1967.
  17. A simple polynomial-time rescaling algorithm for solving linear programs. In Proceedings of the 36th Annual Symposium on Theory of Computing (STOC). ACM, 2004.
  18. Jack Edmonds. Systems of Distinct Representatives and Linear Algebra. Journal of Research of the National Bureau of Standards, 1967.
  19. Xin Gui Fang and George Havas. On the Worst-case Complexity of Integer Gaussian Elimination. In Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC), 1997.
  20. Jürgen Forster. A linear lower bound on the unbounded error probabilistic communication complexity. Journal of Computer and System Sciences, 65, 2002.
  21. A deterministic polynomial time algorithm for non-commutative rational identity testing. 57th IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2016.
  22. Algorithmic and optimization aspects of Brascamp-Lieb inequalities, via operator scaling. Geometric and Functional Analysis, 28, 2018.
  23. Combinatorial geometries, convex polyhedra, and Schubert cells. Advances in Mathematics, 1987.
  24. Geometric Algorithms and Combinatorial Optimization. Springer, 1993.
  25. Algorithms and hardness for robust subspace recovery. In 26th Annual Conference on Learning Theory (COLT), 2013.
  26. R.B. Holmes and V.I. Paulsen. Optimal frames for erasures. Linear Algebra and its Applications, 2004.
  27. Point location and active learning: Learning halfspaces almost optimally. In 61st IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2020.
  28. M. Idel. A review of matrix scaling and Sinkhorn’s normal form for matrices and positive maps. arXiv preprint arXiv:1609.06349, 2016.
  29. Leonid Khachiyan. On the complexity of approximating extremal determinants in matrices. Journal of Complexity, 11, 1995.
  30. The length of vectors in representation spaces. In Knud Lønsted, editor, Algebraic Geometry, pages 233–243, Berlin, Heidelberg, 1979. Springer Berlin Heidelberg.
  31. Donald Knuth. Semi-optimal bases for linear dependencies. Linear and Multilinear Algebra, 17, 1985.
  32. A polynomial predictor-corrector trust-region algorithm for linear programming. SIAM Journal on Optimization, 19(4):1918–1946, 2009.
  33. A deterministic strongly polynomial algorithm for matrix scaling and approximate permanents. Combinatorica, 20(4):545–568, 2000.
  34. N. Megiddo. Towards a genuinely polynomial algorithm for linear programming. SIAM Journal on Computing, 1983.
  35. E. Tardos. A strongly polynomial minimum cost circulation algorithm. Combinatorica, 1985.
  36. A variant of the Vavasis–Ye layered-step interior-point algorithm for linear programming. SIAM Journal on Optimization, 13(4):1054–1079, 2003.
  37. T. Radzik. Newton’s method for fractional combinatorial optimization. In Proceedings of the 33rd Annual Symposium on Foundations of Computer Science (FOCS), 1992.
  38. Entropy, optimization and counting. In Proceedings of the Forty-Sixth Annual Symposium on Theory of Computing (STOC). ACM, 2014.
  39. Stephen Smale. Mathematical problems for the next century. The Mathematical Intelligencer, 1998.
  40. Nikhil Srivastava. The Complexity of Diagonalization. In Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC), 2023.
  41. Maximum entropy distributions: Bit complexity and stability. In Proceedings of the Thirty-Second Conference on Learning Theory (COLT). ACM, 2019.
  42. Levent Tunçel. Approximating the complexity measure of Vavasis-Ye algorithm is NP-hard. Mathematical Programming, 1999.
  43. Condition numbers for polyhedra with real number data. Operations Research Letters, 1995.
  44. A primal-dual interior point method whose running time depends only on the constraint matrix. Mathematical Programming, 1996.
  45. Log-barrier interior point methods are not strongly polynomial. SIAM Journal on Applied Algebra and Geometry, 2018.