Multi-perspective Information Fusion Res2Net with RandomSpecmix for Fake Speech Detection (2306.15389v1)
Abstract: In this paper, we propose a multi-perspective information fusion (MPIF) Res2Net with random Specmix for fake speech detection (FSD). The main goal of this system is to improve the model's ability to learn precise forgery information for the FSD task in low-quality scenarios. Random Specmix, a data-augmentation method, improves the model's generalization and strengthens its ability to locate discriminative information. Specmix cuts and pastes frequency-dimension information of the spectrogram within the same batch of samples without introducing external data, which helps the model locate truly useful information. At the same time, we randomly select samples for augmentation so that the augmentation does not directly alter all of the data. Beyond helping the model locate information, it is also important to reduce unnecessary information; the role of MPIF-Res2Net is to suppress redundant, interfering information. Spoofing cues learned from a single perspective tend to be similar, so a model trained on such similar information produces redundant spoofing clues that interfere with truly discriminative information. The proposed MPIF-Res2Net fuses information from different perspectives, making the learned information more diverse, thereby reducing the redundancy caused by similar information and avoiding interference with the learning of discriminative cues. Results on the ASVspoof 2021 LA dataset demonstrate the effectiveness of our method, achieving an EER of 3.29% and a min t-DCF of 0.2557.
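The random Specmix operation described above (cutting and pasting a frequency band between spectrograms of the same batch, applied only to randomly selected samples) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the hyperparameters `mix_prob` and `max_band` are illustrative assumptions.

```python
import numpy as np

def random_specmix(batch, mix_prob=0.5, max_band=20, rng=None):
    """Sketch of random Specmix on a batch of spectrograms.

    batch: array of shape (N, freq_bins, time_frames).
    For each randomly selected sample, a contiguous band of frequency
    bins is replaced with the same band from another sample in the
    batch, so no external data is introduced.  mix_prob and max_band
    are illustrative hyperparameters, not values from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = batch.copy()
    n, f, _ = batch.shape
    for i in range(n):
        # Randomly select samples to augment; the rest stay untouched,
        # limiting how much of the data the augmentation changes.
        if rng.random() > mix_prob:
            continue
        j = int(rng.integers(n))               # donor sample from the same batch
        width = int(rng.integers(1, max_band + 1))
        f0 = int(rng.integers(0, f - width + 1))
        # Cut-and-paste the frequency band from the donor spectrogram.
        out[i, f0:f0 + width, :] = batch[j, f0:f0 + width, :]
    return out
```

Because the pasted content always comes from another sample in the batch, the label distribution seen by the model is unchanged, while the occluded bands force it to rely on discriminative regions of the spectrogram.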