Adversarially Trained End-to-end Korean Singing Voice Synthesis System
The paper "Adversarially Trained End-to-end Korean Singing Voice Synthesis System" proposes a novel architecture for Korean singing voice synthesis (SVS) capable of generating singing voices directly from lyrics and symbolic melodies. The approach introduces three innovative techniques: phonetic enhancement masking, local conditioning on text and pitch in the super-resolution network, and a conditional adversarial training strategy.
Overview of the Proposed System
The proposed framework is structured into two main modules: a mel-synthesis network and a super-resolution network. The mel-synthesis network generates a mel-spectrogram from the input text and pitch information, while the super-resolution network upsamples this mel-spectrogram into a linear spectrogram. This design obviates the need for vocoder feature prediction, which often limits synthesis quality in traditional SVS systems.
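The two-stage flow can be sketched in a few lines of toy Python. The real networks are learned; here a fixed rule stands in for the mel-synthesis network and simple linear interpolation along the frequency axis stands in for the super-resolution network. The bin counts (80 mel bins, 513 linear bins) and the function names are illustrative assumptions, not values from the paper.

```python
# Toy sketch of the two-stage pipeline: a mel-synthesis stage that emits
# one mel frame per input symbol, and a super-resolution stage that
# expands each frame from N_MEL to N_LINEAR frequency bins.
N_MEL, N_LINEAR = 80, 513  # assumed bin counts, for illustration only

def mel_synthesis(text_ids, pitch_ids):
    """Stand-in for the mel-synthesis network: one mel frame per symbol."""
    return [[0.01 * ((t + p + b) % 7) for b in range(N_MEL)]
            for t, p in zip(text_ids, pitch_ids)]

def super_resolution(mel_frames):
    """Stand-in for the super-resolution network: upsample each frame
    along the frequency axis via linear interpolation."""
    out = []
    for frame in mel_frames:
        linear = []
        for i in range(N_LINEAR):
            # map linear-bin index i back onto the mel-bin axis
            x = i * (N_MEL - 1) / (N_LINEAR - 1)
            lo = int(x)
            hi = min(lo + 1, N_MEL - 1)
            frac = x - lo
            linear.append((1 - frac) * frame[lo] + frac * frame[hi])
        out.append(linear)
    return out

mel = mel_synthesis(text_ids=[3, 1, 4], pitch_ids=[60, 62, 64])
linear = super_resolution(mel)
```

In the actual system both stages are trained networks conditioned on text and pitch; the point of the sketch is only the shape of the data as it moves through the pipeline.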
Key Contributions
- Phonetic Enhancement Masking: This method generates implicit formant masks from the text input, allowing the model to focus specifically on pronunciation features. Empirical results suggest that this yields more accurate phonetic rendering.
- Local Conditioning and Adversarial Training: By locally conditioning the super-resolution network on text and pitch data, and by employing a conditional adversarial training scheme, the system produces more realistic, higher-quality audio. The architecture leverages a projection discriminator and R1 regularization to stabilize adversarial training.
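The masking idea above can be illustrated as an elementwise gate on a spectrogram frame: a text-derived mask in (0, 1) scales each frequency bin, so the model can emphasize formant regions. This is a minimal sketch of the gating mechanism only; the function names and the exact way the paper combines mask and spectrogram are assumptions.

```python
import math

def sigmoid(x):
    """Logistic function, squashing a logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def apply_phonetic_mask(base_frame, mask_logits):
    """Gate each frequency bin of a base spectrogram frame with a
    text-derived mask: large logits keep a bin near its original value,
    strongly negative logits suppress it."""
    return [b * sigmoid(m) for b, m in zip(base_frame, mask_logits)]

# A bin that the mask keeps, one it halves, and one it suppresses:
masked = apply_phonetic_mask([1.0, 1.0, 1.0], [10.0, 0.0, -10.0])
```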
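The two stabilization techniques named above have compact definitions. A projection discriminator scores a sample as an unconditional term plus the inner product of a condition embedding with the sample's feature vector; R1 regularization penalizes the squared gradient norm of the discriminator on real samples. The sketch below shows only these two formulas in pure Python; the gradient would come from autodiff in a real training loop, and all names here are illustrative assumptions.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def projection_logit(features, cond_embedding, psi_weights, psi_bias=0.0):
    """Projection-discriminator score: an unconditional term (here a
    single linear layer on the feature vector) plus the inner product
    of the condition embedding with the same features."""
    return dot(psi_weights, features) + psi_bias + dot(cond_embedding, features)

def r1_penalty(grad_real, gamma=10.0):
    """R1 regularization: (gamma / 2) * ||grad_x D(x)||^2 on real
    samples; grad_real is the discriminator's input gradient."""
    return 0.5 * gamma * sum(g * g for g in grad_real)
```

The conditional term lets the discriminator reject outputs whose content does not match the conditioning (here, text and pitch), while the R1 term keeps discriminator gradients small near real data, which is known to stabilize GAN training.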
Experimental Validation
The authors validate their approach on a newly collected dataset of 60 Korean pop songs, with recordings manually aligned to the song lyrics and MIDI files. The dataset enabled both quantitative and qualitative evaluation of the model.
Numerical Results
- F1-score: The best-performing model configuration achieved an F1-score of 0.846, indicating that the generated pitch closely matches the conditioned input pitch.
- The paper compares pronunciation accuracy, sound quality, and naturalness of the singing voice across different model configurations and finds noticeable improvements when all proposed methods are used together.
Implications and Future Scope
The introduction of phonetic enhancement masking and conditional adversarial training presents a significant refinement in SVS methodology. This research indicates potential across multiple applications, including more naturalistic synthetic vocals in consumer music production and enhanced speech synthesis applications. Future work could explore the generalization of these techniques to other languages and more complex prosodic features, as well as the integration of neural vocoder systems to further enhance audio quality.
In summary, the paper presents a substantive advance in singing voice synthesis, balancing theoretical contributions with practical implications and laying groundwork for future innovations in artificial voice generation.