Discrete transforms of quantized persistence diagrams (2312.17093v3)

Published 28 Dec 2023 in math.AT, cs.CG, and stat.ML

Abstract: Topological data analysis leverages topological features to analyze datasets, with applications in fields as diverse as medicine and biology. A key tool of the theory is the persistence diagram, which encodes topological information but is difficult to integrate into standard machine learning pipelines. We introduce Qupid (QUantized Persistence and Integral transforms of Diagrams), a simple, novel method for vectorizing persistence diagrams. Qupid first uses a binning procedure to turn persistence diagrams into finite measures on a grid, and then applies discrete transforms to these measures. Its key features are log-scaled grids, which emphasize the information concentrated near the diagonal of persistence diagrams, combined with discrete transforms that efficiently encode the resulting topological information. An in-depth experimental analysis shows that Qupid's simplicity yields very low computational costs while remaining highly competitive with state-of-the-art methods across numerous classification tasks on both synthetic and real-world datasets. Finally, we provide experimental evidence that the method is robust to decreases in grid resolution.
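
The pipeline described in the abstract (log-scaled binning followed by a discrete transform) can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the grid resolution, the (birth, persistence) parametrization, the log-scaled bin edges, and the choice of a 2D FFT as the discrete transform are all hypothetical choices made for the example.

import numpy as np

def qupid_vectorize(diagram, resolution=16, eps=1e-6):
    # diagram: (n, 2) array of (birth, death) pairs with death > birth.
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]  # persistence = distance to the diagonal

    # Log-scaled bin edges: geometric spacing places more bins near zero,
    # emphasizing points close to the diagonal (short-lived features).
    b_edges = np.geomspace(eps, birth.max() + eps, resolution + 1)
    p_edges = np.geomspace(eps, pers.max() + eps, resolution + 1)

    # Binning turns the diagram into a finite measure (a histogram) on the grid.
    measure, _, _ = np.histogram2d(birth + eps, pers + eps,
                                   bins=[b_edges, p_edges])

    # Discrete transform of the quantized measure; taking magnitudes gives a
    # fixed-length feature vector invariant to the ordering of diagram points.
    spectrum = np.fft.fft2(measure)
    return np.abs(spectrum).ravel()

# Toy usage: three points, one very close to the diagonal.
pd = np.array([[0.1, 0.9], [0.2, 0.3], [0.5, 1.4]])
features = qupid_vectorize(pd)  # shape (256,) for a 16x16 grid

Because the histogram is a fixed-size array, the resulting feature vector has the same length for every diagram, which is what makes it directly usable by standard classifiers; the FFT here stands in for whichever discrete transform (e.g., a discrete wavelet transform) one prefers.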
