
Exploring Prime Number Classification: Achieving High Recall Rate and Rapid Convergence with Sparse Encoding (2402.03363v2)

Published 30 Jan 2024 in math.NT and cs.LG

Abstract: This paper presents a novel approach at the intersection of machine learning and number theory, focusing on the classification of prime and non-prime numbers. At the core of our research is the development of a highly sparse encoding method, integrated with conventional neural network architectures. This combination has shown promising results, achieving a recall of over 99\% in identifying prime numbers and 79\% for non-prime numbers from an inherently imbalanced sequential series of integers, while exhibiting rapid model convergence before the completion of a single training epoch. We trained on $10^6$ integers starting from a specified integer and tested on a disjoint range of $2 \times 10^6$ integers extending from $10^6$ to $3 \times 10^6$, offset by the same starting integer. While the memory capacity of our resources limited our analysis to a span of $3 \times 10^6$, we believe that our study contributes to the application of machine learning in prime number analysis. This work aims to demonstrate the potential of such applications and hopes to inspire further exploration in diverse fields.
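The abstract does not specify the sparse encoding itself. As a purely hypothetical illustration of what a highly sparse integer encoding paired with exact primality labels could look like (the choice of residues modulo small primes, the `MODULI` list, and the range sizes below are all assumptions, not the paper's method), consider:

```python
# Hypothetical sketch: a sparse binary encoding of integers for
# prime/non-prime classification. NOT the paper's actual encoding --
# here we one-hot encode residues modulo a few small primes, which
# gives a vector with exactly one active bit per modulus.

def is_prime(n: int) -> bool:
    """Trial-division primality check used to label the data."""
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0:
        return False
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

MODULI = [2, 3, 5, 7, 11]  # assumed small-prime moduli for this sketch

def sparse_encode(n: int) -> list:
    """Concatenate one-hot residue vectors: sum(MODULI) dims, len(MODULI) ones."""
    vec = []
    for m in MODULI:
        one_hot = [0] * m
        one_hot[n % m] = 1
        vec.extend(one_hot)
    return vec

# A tiny labeled dataset over a contiguous integer range, mirroring the
# paper's setup of training on a sequential (class-imbalanced) series.
start, count = 10, 20
dataset = [(sparse_encode(n), int(is_prime(n))) for n in range(start, start + count)]
```

Such vectors could then be fed to any conventional classifier; the sparsity (5 nonzeros out of 28 dimensions here) is the point of the illustration, not the specific moduli.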

