Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors (2402.01082v1)

Published 2 Feb 2024 in cs.CR and cs.LG

Abstract: Learning with Errors (LWE) is a hard math problem underlying recently standardized post-quantum cryptography (PQC) systems for key exchange and digital signatures. Prior work proposed new ML-based attacks on LWE problems with small, sparse secrets, but these attacks require millions of LWE samples to train on and take days to recover secrets. We propose three key methods -- better preprocessing, angular embeddings and model pre-training -- to improve these attacks, speeding up preprocessing by $25\times$ and improving model sample efficiency by $10\times$. We demonstrate for the first time that pre-training improves and reduces the cost of ML attacks on LWE. Our architecture improvements enable scaling to larger-dimension LWE problems: this work is the first instance of ML attacks recovering sparse binary secrets in dimension $n=1024$, the smallest dimension used in practice for homomorphic encryption applications of LWE where sparse binary secrets are proposed.
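For context on the problem the abstract describes: an LWE instance publishes pairs (a, b) with b = ⟨a, s⟩ + e (mod q), and the secrets attacked here are sparse and binary. The sketch below is a minimal illustration of such samples, together with one plausible way to encode residues mod q as points on the unit circle, in the spirit of the paper's angular embeddings. The parameters, the error width, and the exact embedding formula are assumptions made for illustration, not the paper's construction.

```python
# Illustrative sketch only: a toy LWE sample generator with a sparse binary
# secret, plus one plausible "angular" encoding of residues mod q. The
# constants (n, q, Hamming weight, error width) are assumed toy values.
import numpy as np

n, q, hamming_weight = 1024, 2**26, 20  # assumed parameters, not the paper's

rng = np.random.default_rng(0)

# Sparse binary secret s: exactly `hamming_weight` coordinates set to 1.
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=hamming_weight, replace=False)] = 1

def lwe_sample():
    """Return one LWE pair (a, b) with b = <a, s> + e mod q."""
    a = rng.integers(0, q, size=n, dtype=np.int64)
    e = int(rng.normal(0, 3).round())  # small Gaussian error (assumed width)
    b = (int(a @ s) + e) % q
    return a, b

def angular_embedding(x):
    """Map a residue x mod q to a point on the unit circle.

    Values near 0 and near q - 1 are close in the modular metric but far
    apart as integers; an angular encoding makes that wrap-around visible.
    """
    theta = 2 * np.pi * (x % q) / q
    return np.cos(theta), np.sin(theta)

a, b = lwe_sample()
print(angular_embedding(b))
```

The motivation for a circular encoding is that modular arithmetic identifies 0 with q, so representing residues as angles preserves neighborhoods that a plain integer encoding breaks; whether this matches the paper's embedding in detail is not established by the abstract alone.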
