StyleCap: Automatic Speaking-Style Captioning from Speech Based on Speech and Language Self-supervised Learning Models (2311.16509v2)
Abstract: We propose StyleCap, a method to generate natural language descriptions of speaking styles appearing in speech. Although most conventional techniques for para-/non-linguistic information recognition focus on category classification or intensity estimation of pre-defined labels, they cannot provide the reasoning behind their recognition results in an interpretable manner. StyleCap is a first step towards an end-to-end method for generating speaking-style prompts from speech, i.e., automatic speaking-style captioning. StyleCap is trained on paired data of speech and natural language descriptions. We train neural networks that convert a speech representation vector into prefix vectors fed into a large language model (LLM)-based text decoder, and explore which text decoder and speech feature representation are suitable for this new task. The experimental results demonstrate that StyleCap, when leveraging richer LLMs for the text decoder, speech self-supervised learning (SSL) features, and sentence-rephrasing augmentation, improves the accuracy and diversity of generated speaking-style captions. Samples of speaking-style captions generated by StyleCap are publicly available.
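To make the prefix-vector idea concrete, below is a minimal PyTorch sketch of a ClipCap-style mapping network that turns one speech embedding into a sequence of prefix embeddings prepended to a text decoder's input. The module name `PrefixMapper`, all dimensions, the mean-pooled SSL embedding, and the choice of GPT-2 as the decoder are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of prefix mapping for speaking-style captioning.
# Assumptions (not from the paper): a pre-extracted, mean-pooled speech
# SSL embedding; GPT-2 as the LLM text decoder; a 2-layer MLP mapper.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class PrefixMapper(nn.Module):
    """Maps one speech embedding to `prefix_len` decoder-space embeddings."""
    def __init__(self, speech_dim=768, prefix_len=10, lm_dim=768):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(speech_dim, lm_dim * prefix_len // 2),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len // 2, lm_dim * prefix_len),
        )

    def forward(self, speech_emb):            # (batch, speech_dim)
        out = self.mlp(speech_emb)            # (batch, prefix_len * lm_dim)
        return out.view(-1, self.prefix_len, self.lm_dim)

decoder = GPT2LMHeadModel.from_pretrained("gpt2")
mapper = PrefixMapper()

speech_emb = torch.randn(2, 768)              # stand-in for SSL features
caption_ids = torch.randint(0, 50257, (2, 12))  # stand-in caption tokens
tok_emb = decoder.transformer.wte(caption_ids)  # caption token embeddings

# Prepend the mapped prefix and train with the standard LM loss;
# labels at prefix positions are set to -100 so they are ignored.
prefix = mapper(speech_emb)
inputs_embeds = torch.cat([prefix, tok_emb], dim=1)
labels = torch.cat(
    [torch.full((2, mapper.prefix_len), -100), caption_ids], dim=1
)
loss = decoder(inputs_embeds=inputs_embeds, labels=labels).loss
```

In this setup only the mapper (and optionally the decoder) is trained; freezing the decoder keeps the LLM's generation ability intact while the mapper learns to project speech information into its embedding space.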
Authors: Kazuki Yamauchi, Yusuke Ijima, Yuki Saito