
Federated Learning in ASR: Not as Easy as You Think (2109.15108v1)

Published 30 Sep 2021 in eess.AS and cs.SD

Abstract: With the growing availability of smart devices and cloud services, personal speech assistance systems are increasingly used on a daily basis. Most devices redirect the voice recordings to a central server, which uses them for upgrading the recognizer model. This leads to major privacy concerns, since private data could be misused by the server or third parties. Federated learning is a decentralized optimization strategy that has been proposed to address such concerns. Utilizing this approach, private data is used for on-device training. Afterwards, updated model parameters are sent to the server to improve the global model, which is redistributed to the clients. In this work, we implement federated learning for speech recognition in a hybrid and an end-to-end model. We discuss the outcomes of these systems, which both show great similarities and only small improvements, pointing to a need for a deeper understanding of federated learning for speech recognition.
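The federated learning loop described in the abstract (on-device training, uploading updated parameters, server-side aggregation, redistribution of the global model) can be sketched as a toy federated averaging round. This is an illustrative sketch, not the paper's implementation; the function names `local_update` and `fed_avg` and the toy one-step "training" rule are assumptions.

```python
# Hypothetical sketch of one federated averaging (FedAvg) round.
# Each client trains on private data locally; only parameters are shared.

def local_update(global_params, client_data, lr=0.1):
    """Simulated on-device training: one step pulling each parameter
    toward the client's local data (stand-in for real SGD epochs)."""
    return [p - lr * (p - x) for p, x in zip(global_params, client_data)]

def fed_avg(client_params, client_sizes):
    """Server aggregation: average client parameters, weighted by
    each client's amount of local data."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(w * params[i] for params, w in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# One round with two clients: local training, then weighted averaging.
global_params = [0.0, 0.0]
client_data = [[1.0, 2.0], [3.0, 4.0]]
client_sizes = [10, 30]
updates = [local_update(global_params, d) for d in client_data]
global_params = fed_avg(updates, client_sizes)  # redistributed next round
```

In a real ASR setting the parameter vectors are the weights of the hybrid or end-to-end acoustic model, and the local step is several epochs of training on the device's private recordings.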

Authors (5)
  1. Wentao Yu (28 papers)
  2. Jan Freiwald (1 paper)
  3. Sören Tewes (1 paper)
  4. Fabien Huennemeyer (1 paper)
  5. Dorothea Kolossa (33 papers)
Citations (17)
