Leveraging Acoustic Cues and Paralinguistic Embeddings to Detect Expression from Voice (1907.00112v1)

Published 28 Jun 2019 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: Millions of people reach out to digital assistants such as Siri every day, asking for information, making phone calls, seeking assistance, and much more. The expectation is that such assistants should understand the intent of the user's query. Detecting the intent of a query from a short, isolated utterance is a difficult task. Intent cannot always be obtained from speech-recognized transcriptions. A transcription-driven approach can interpret what has been said but fails to acknowledge how it has been said, and as a consequence may ignore the expression present in the voice. Our work investigates whether a system can reliably detect vocal expression in queries using acoustic and paralinguistic embeddings. Results show that the proposed method offers a relative equal error rate (EER) decrease of 60% compared to a bag-of-words based system, corroborating that expression is significantly represented by vocal attributes rather than being purely lexical. Adding an emotion embedding reduced the EER by a further 30% relative to the acoustic embedding alone, demonstrating the relevance of emotion in expressive voice.
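The abstract's headline numbers are relative reductions in equal error rate (EER), the operating point where the false-accept and false-reject rates are equal. A minimal sketch of how EER and a relative reduction could be computed is below; the function names and toy scores are illustrative, not from the paper.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Sweep thresholds over the detection scores and return the
    error rate at the point where false-accept rate (FAR) and
    false-reject rate (FRR) are closest to equal."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_eer, best_gap = 1.0, float("inf")
    for t in np.unique(scores):
        far = np.mean(scores[labels == 0] >= t)  # negatives accepted
        frr = np.mean(scores[labels == 1] < t)   # positives rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

def relative_eer_reduction(baseline_eer, proposed_eer):
    """Relative improvement of the proposed system over the baseline,
    e.g. 0.10 -> 0.04 is a 60% relative reduction."""
    return (baseline_eer - proposed_eer) / baseline_eer

# Toy illustration: perfectly separated scores give EER = 0.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(equal_error_rate(scores, labels))          # 0.0
print(relative_eer_reduction(0.10, 0.04))        # 0.6
```

A 60% relative reduction thus means the proposed acoustic/paralinguistic system's EER is 40% of the bag-of-words baseline's, whatever the absolute values are.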

Authors (9)
  1. Vikramjit Mitra (20 papers)
  2. Sue Booker (2 papers)
  3. Erik Marchi (18 papers)
  4. David Scott Farrar (1 paper)
  5. Ute Dorothea Peitz (1 paper)
  6. Bridget Cheng (1 paper)
  7. Ermine Teves (1 paper)
  8. Anuj Mehta (1 paper)
  9. Devang Naik (26 papers)
Citations (13)