StyleCap: Automatic Speaking-Style Captioning from Speech Based on Speech and Language Self-supervised Learning Models (2311.16509v2)

Published 28 Nov 2023 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: We propose StyleCap, a method to generate natural language descriptions of speaking styles appearing in speech. Although most conventional techniques for para-/non-linguistic information recognition focus on category classification or intensity estimation of pre-defined labels, they cannot explain their recognition results in an interpretable manner. StyleCap is a first step towards an end-to-end method for generating speaking-style prompts from speech, i.e., automatic speaking-style captioning. StyleCap is trained with paired data of speech and natural language descriptions. We train neural networks that convert a speech representation vector into prefix vectors that are fed into an LLM-based text decoder. We explore a text decoder and speech feature representation suitable for this new task. The experimental results demonstrate that our StyleCap, leveraging richer LLMs for the text decoder, speech self-supervised learning (SSL) features, and sentence-rephrasing augmentation, improves the accuracy and diversity of generated speaking-style captions. Samples of speaking-style captions generated by our StyleCap are publicly available.
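The core mapping described in the abstract (a speech representation vector converted into prefix vectors for an LLM text decoder, in the spirit of ClipCap-style prefix captioning) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the two-layer mapping network, and the function name `speech_to_prefix` are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration (not taken from the paper):
SPEECH_DIM = 768    # size of a pooled speech-SSL feature vector
PREFIX_LEN = 10     # number of prefix tokens fed to the text decoder
DECODER_DIM = 1024  # embedding size of the LLM-based text decoder

# A simple two-layer mapping network: one speech vector is projected to
# PREFIX_LEN continuous embeddings that would be prepended to the decoder's
# token embeddings during caption generation.
W1 = rng.standard_normal((SPEECH_DIM, 2048)) * 0.02
W2 = rng.standard_normal((2048, PREFIX_LEN * DECODER_DIM)) * 0.02

def speech_to_prefix(speech_vec: np.ndarray) -> np.ndarray:
    """Map a pooled speech representation to a sequence of prefix vectors."""
    h = np.tanh(speech_vec @ W1)           # hidden activation, shape (2048,)
    prefix = h @ W2                        # flat prefix, shape (PREFIX_LEN * DECODER_DIM,)
    return prefix.reshape(PREFIX_LEN, DECODER_DIM)

speech_vec = rng.standard_normal(SPEECH_DIM)   # stand-in for an SSL feature
prefix = speech_to_prefix(speech_vec)
print(prefix.shape)  # (10, 1024)
```

In training, such a mapping network would be optimized jointly with (or in front of a frozen) text decoder so that the prefix conditions the generated caption on the speaking style.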

Authors (3)
  1. Kazuki Yamauchi
  2. Yusuke Ijima
  3. Yuki Saito