
Deep Reinforcement Learning with Pre-training for Time-efficient Training of Automatic Speech Recognition (2005.11172v1)

Published 21 May 2020 in eess.AS and cs.SD

Abstract: Deep reinforcement learning (deep RL) combines deep learning with reinforcement learning principles to create efficient methods that learn by interacting with their environment. This has led to breakthroughs in many complex tasks, such as playing the game "Go", that were previously difficult to solve. However, deep RL requires significant training time, making it difficult to use in various real-life applications such as Human-Computer Interaction (HCI). In this paper, we study pre-training in deep RL to reduce the training time and improve the performance of speech recognition, a popular application of HCI. To evaluate the improvement in training, we use the publicly available "Speech Commands" dataset, which contains utterances of 30 command keywords spoken by 2,618 speakers. Results show that pre-training with deep RL offers faster convergence than non-pre-trained RL while achieving improved speech recognition accuracy.
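The two-phase scheme the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the authors' architecture: a linear policy over synthetic "speech features" is first pre-trained with supervised cross-entropy, then fine-tuned with REINFORCE-style policy gradients, where sampling the correct keyword label yields reward +1 and a wrong label -1. All data, shapes, and hyperparameters are assumptions for illustration.

```python
# Hedged sketch: supervised pre-training of a policy network before
# policy-gradient (REINFORCE) RL fine-tuning on a classification task.
# Toy stand-in for Speech Commands features; not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features = 4, 16

# Synthetic data: each "keyword" class is a noisy Gaussian cluster.
centers = rng.normal(size=(n_classes, n_features))

def sample_batch(n):
    y = rng.integers(0, n_classes, size=n)
    x = centers[y] + 0.3 * rng.normal(size=(n, n_features))
    return x, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = 0.01 * rng.normal(size=(n_features, n_classes))

# Phase 1: supervised pre-training (cross-entropy on labeled utterances).
for _ in range(300):
    x, y = sample_batch(64)
    p = softmax(x @ W)
    grad = x.T @ (p - np.eye(n_classes)[y]) / len(y)
    W -= 0.5 * grad

# Phase 2: RL fine-tuning. The policy samples a label ("action"),
# receives reward +1 if correct else -1, and the REINFORCE gradient
# reinforces sampled actions in proportion to the reward:
# grad of -r * log pi(a|x) w.r.t. the logits is (p - onehot(a)) * r.
for _ in range(200):
    x, y = sample_batch(64)
    p = softmax(x @ W)
    a = np.array([rng.choice(n_classes, p=pi) for pi in p])
    r = np.where(a == y, 1.0, -1.0)
    g = (p - np.eye(n_classes)[a]) * r[:, None]
    W -= 0.1 * (x.T @ g) / len(y)

x, y = sample_batch(1000)
acc = (softmax(x @ W).argmax(axis=1) == y).mean()
print(f"accuracy after pre-training + RL fine-tuning: {acc:.2f}")
```

The point of the sketch is the ordering: starting Phase 2 from the pre-trained weights gives the policy informative initial action probabilities, so the sparse reward signal converges far faster than it would from a random initialization, which is the effect the paper measures.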

Authors (5)
  1. Thejan Rajapakshe (8 papers)
  2. Siddique Latif (38 papers)
  3. Rajib Rana (52 papers)
  4. Sara Khalifa (21 papers)
  5. Björn W. Schuller (153 papers)
Citations (8)
