The Bit Complexity of Dynamic Algebraic Formulas and their Determinants (2401.11127v2)
Abstract: Many iterative algorithms in optimization, computational geometry, computer algebra, and other areas of computer science require repeated computation of some algebraic expression whose input changes slightly from one iteration to the next. Although efficient data structures have been proposed for maintaining the solution of such algebraic expressions under low-rank updates, most of these results are analyzed only under exact arithmetic (the real-RAM model or finite fields), which may not accurately reflect the complexity guarantees of real computers. In this paper, we analyze the stability and bit complexity of such data structures for expressions involving the inversion, multiplication, addition, and subtraction of matrices under the word-RAM model. We show that the bit complexity increases only linearly in the number of matrix operations in the expression. In addition, we consider the bit complexity of maintaining the determinant of a matrix expression. We show that the required bit complexity depends on the logarithm of the condition number of the matrices rather than the logarithm of their determinant. We also discuss rank maintenance and its connections to determinant maintenance. Our results have wide applications, ranging from computational geometry (e.g., computing the volume of a polytope) to optimization (e.g., solving linear programs using the simplex algorithm).
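To make the flavor of such dynamic maintenance concrete, here is a minimal numerical sketch (not the paper's word-RAM algorithm) of updating a matrix inverse and determinant under a rank-1 change, using the Sherman-Morrison formula and the matrix determinant lemma in ordinary floating-point arithmetic. All variable names and the choice of numpy are illustrative assumptions; the paper's contribution concerns how much precision such updates need, which this sketch does not model.

```python
import numpy as np

# Illustrative sketch: maintain A^{-1} and det(A) under a rank-1 update
# A' = A + u v^T, instead of recomputing them from scratch.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # a well-conditioned test matrix
A_inv = np.linalg.inv(A)
A_det = np.linalg.det(A)

# Rank-1 update vectors.
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

# Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
denom = 1.0 + (v.T @ A_inv @ u).item()
A_inv_new = A_inv - (A_inv @ u) @ (v.T @ A_inv) / denom

# Matrix determinant lemma: det(A + u v^T) = (1 + v^T A^{-1} u) * det(A)
A_det_new = denom * A_det

# Sanity check against direct recomputation.
A_new = A + u @ v.T
assert np.allclose(A_inv_new, np.linalg.inv(A_new))
assert np.isclose(A_det_new, np.linalg.det(A_new))
```

In exact arithmetic, identities of this kind (and their low-rank Woodbury generalizations) underlie dynamic matrix-inverse and dynamic-determinant data structures; the question studied in the paper is how many bits of precision are needed so that a long sequence of such updates remains stable on a word-RAM machine, with the determinant case controlled by the condition number rather than the magnitude of the determinant itself.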