
Simple Lattice Basis Computation -- The Generalization of the Euclidean Algorithm (2311.15902v1)

Published 27 Nov 2023 in cs.DS, cs.DM, and math.NT

Abstract: The Euclidean algorithm is one of the oldest algorithms known to mankind. Given two integers $a_1$ and $a_2$, it computes the greatest common divisor (gcd) of $a_1$ and $a_2$ in a very elegant way. From a lattice perspective, it computes a basis of the sum of two one-dimensional lattices $a_1 \mathbb{Z}$ and $a_2 \mathbb{Z}$, as $\gcd(a_1,a_2) \mathbb{Z} = a_1 \mathbb{Z} + a_2 \mathbb{Z}$. In this paper, we show that the classical Euclidean algorithm can be adapted in a very natural way to compute a basis of a general lattice $L(a_1, \ldots, a_m)$ given vectors $a_1, \ldots, a_m \in \mathbb{Z}^n$ with $m > \mathrm{rank}(a_1, \ldots, a_m)$. Like the Euclidean algorithm, our algorithm is easy to describe and implement and can be written in 12 lines of pseudocode. While the Euclidean algorithm halves the largest number in every iteration, our generalized algorithm halves the determinant of a full-rank subsystem, leading to at most $\log(\det B)$ iterations for some initial subsystem $B$. Therefore, we can compute a basis of the lattice using at most $\tilde{O}((m-n)n\log(\det B) + mn^{\omega-1}\log(\|A\|_\infty))$ arithmetic operations, where $\omega$ is the matrix multiplication exponent and $A = (a_1, \ldots, a_m)$. Even using the worst-case Hadamard bound for the determinant, our algorithm improves upon existing algorithms. Another major advantage of our algorithm is that we can bound the entries of the resulting lattice basis by $\tilde{O}(n^2 \cdot \|A\|_{\infty})$ using a simple pivoting rule. This is in contrast to the typical approach for computing a lattice basis, where the Hermite normal form (HNF) is used. In the HNF, entries can be as large as the determinant and hence can only be bounded by an exponential term.
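For reference, the scalar case that the paper generalizes fits in a few lines of Python: the classical Euclidean algorithm, read as a lattice computation, returns the generator $\gcd(a_1,a_2)$ of $a_1\mathbb{Z} + a_2\mathbb{Z}$. This is only a minimal sketch of the one-dimensional base case; the paper's generalized algorithm operates on integer vectors and halves the determinant of a full-rank subsystem instead of the larger scalar, and is not reproduced here.

```python
def euclid_basis(a1: int, a2: int) -> int:
    """Return g such that g*Z = a1*Z + a2*Z, i.e. g = gcd(a1, a2)."""
    a1, a2 = abs(a1), abs(a2)
    while a2 != 0:
        # Replace the pair by (a2, a1 mod a2); the remainder at least halves
        # every two iterations -- the scalar analogue of halving det(B)
        # in the generalized algorithm.
        a1, a2 = a2, a1 % a2
    return a1


if __name__ == "__main__":
    # The lattice 12*Z + 18*Z has basis 6*Z.
    print(euclid_basis(12, 18))  # -> 6
```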
