Cross-Temporal Spectrogram Autoencoder (CTSAE): Unsupervised Dimensionality Reduction for Clustering Gravitational Wave Glitches (2404.15552v1)

Published 23 Apr 2024 in cs.CV, astro-ph.IM, cs.LG, and gr-qc

Abstract: The advancement of the Laser Interferometer Gravitational-Wave Observatory (LIGO) has significantly enhanced the feasibility and reliability of gravitational wave detection. However, LIGO's high sensitivity makes it susceptible to transient noise artifacts known as glitches, which must be effectively differentiated from real gravitational wave signals. Traditional approaches predominantly employ fully supervised or semi-supervised algorithms for glitch classification and clustering. For the future task of identifying and classifying glitches across both main and auxiliary channels, building a dataset with manually labeled ground truth is impractical. In addition, glitch morphologies can vary over time, producing new glitch types that lack manual labels. In response to this challenge, we introduce the Cross-Temporal Spectrogram Autoencoder (CTSAE), a pioneering unsupervised method for the dimensionality reduction and clustering of gravitational wave glitches. CTSAE integrates a novel four-branch autoencoder that hybridizes Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). To fuse features across the branches, we introduce a novel multi-branch fusion method based on the CLS (class) token. Our model, trained and evaluated on the main channel of the Gravity Spy O3 dataset, outperforms state-of-the-art semi-supervised learning methods on clustering tasks. To the best of our knowledge, CTSAE is the first unsupervised approach tailored specifically for clustering LIGO data, marking a significant step forward in gravitational wave research. The code for this paper is available at https://github.com/Zod-L/CTSAE
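
Below is a minimal PyTorch sketch of what a CLS-token-based multi-branch fusion of the kind described above might look like. It is an illustration only: the module name (CLSFusion), the dimensions, the attention-based fusion rule, and the assumption that each of the four branches processes one spectrogram view are choices made here for clarity, not the authors' implementation; see https://github.com/Zod-L/CTSAE for the actual code.

import torch
import torch.nn as nn

class CLSFusion(nn.Module):
    """Fuse per-branch token sequences by exchanging their CLS tokens.

    Hypothetical sketch: each of the four branches produces a token
    sequence whose first token is a CLS token. The CLS tokens are stacked
    across branches, and each branch's CLS token attends over all of them,
    letting global information flow between branches without mixing the
    per-branch patch tokens directly.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, branches):
        # branches: list of (B, N, dim) token sequences; token 0 is CLS.
        cls_tokens = torch.stack([b[:, 0] for b in branches], dim=1)  # (B, 4, dim)
        fused = []
        for b in branches:
            q = b[:, :1]                                   # this branch's CLS, (B, 1, dim)
            mixed, _ = self.attn(q, cls_tokens, cls_tokens)  # attend over all CLS tokens
            cls = self.norm(q + mixed)                     # residual update + norm
            fused.append(torch.cat([cls, b[:, 1:]], dim=1))
        return fused

# Usage: four branches of 65 tokens (1 CLS + 64 patches) with width 128.
fusion = CLSFusion(dim=128)
branches = [torch.randn(2, 65, 128) for _ in range(4)]
out = fusion(branches)
print([t.shape for t in out])  # four tensors of shape (2, 65, 128)

The design intuition behind such a scheme is that each branch's CLS token summarizes one view of the glitch spectrogram, so exchanging only the CLS tokens provides a cheap channel for cross-branch (here, cross-temporal) context while keeping each branch's patch-level features intact.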

