Jointly Learning to Align and Convert Graphemes to Phonemes with Neural Attention Models (1610.06540v1)

Published 20 Oct 2016 in cs.CL and cs.AI

Abstract: We propose an attention-enabled encoder-decoder model for the problem of grapheme-to-phoneme conversion. Most previous work has tackled the problem via joint sequence models that require explicit alignments for training. In contrast, the attention-enabled encoder-decoder model allows for jointly learning to align and convert characters to phonemes. We explore different types of attention models, including global and local attention, and our best models achieve state-of-the-art results on three standard data sets (CMUDict, Pronlex, and NetTalk).
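The abstract gives the high-level idea but no implementation details. As a rough illustration of the mechanism it describes (attention weights acting as a soft, jointly learned alignment between input characters and output phonemes), a minimal encoder-decoder sketch in PyTorch might look like the following. All module names, dimensions, the choice of GRU units, and the dot-product attention scorer are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Bidirectional GRU encoder over grapheme (character) sequences."""
    def __init__(self, num_graphemes, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_graphemes, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, chars):                       # chars: (B, T_src) int64
        states, _ = self.rnn(self.embed(chars))     # states: (B, T_src, 2*hid_dim)
        return states

class GlobalAttention(nn.Module):
    """Global attention: score every encoder position at each decoding step."""
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.proj = nn.Linear(dec_dim, enc_dim, bias=False)

    def forward(self, dec_state, enc_states):       # (B, D), (B, T_src, E)
        scores = torch.bmm(enc_states, self.proj(dec_state).unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)          # soft alignment over source chars
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)  # (B, E)
        return context, weights

class Decoder(nn.Module):
    """GRU decoder that re-attends to the encoder before emitting each phoneme."""
    def __init__(self, num_phonemes, enc_dim=256, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_phonemes, emb_dim)
        self.cell = nn.GRUCell(emb_dim + enc_dim, hid_dim)
        self.attn = GlobalAttention(hid_dim, enc_dim)
        self.out = nn.Linear(hid_dim + enc_dim, num_phonemes)

    def step(self, prev_phoneme, state, context, enc_states):
        inp = torch.cat([self.embed(prev_phoneme), context], dim=1)
        state = self.cell(inp, state)
        context, weights = self.attn(state, enc_states)
        logits = self.out(torch.cat([state, context], dim=1))
        return logits, state, context, weights
```

Training such a model would minimize cross-entropy over the phoneme sequence with teacher forcing; the per-step attention weights then serve as the learned character-to-phoneme alignment, removing the need for the explicit alignments that joint sequence models require. A local-attention variant, as explored in the paper, would restrict the softmax to a window around a predicted source position rather than attending over the full sequence.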

Authors (2)
  1. Shubham Toshniwal (25 papers)
  2. Karen Livescu (89 papers)
Citations (40)
