
Learning Hard Alignments with Variational Inference (1705.05524v2)

Published 16 May 2017 in cs.AI, cs.LG, and stat.ML

Abstract: There has recently been significant interest in hard attention models for tasks such as object recognition, visual captioning and speech recognition. Hard attention can offer benefits over soft attention such as decreased computational cost, but training hard attention models can be difficult because of the discrete latent variables they introduce. Previous work used REINFORCE and Q-learning to approach these issues, but those methods can provide high-variance gradient estimates and be slow to train. In this paper, we tackle the problem of learning hard attention for a sequential task using variational inference methods, specifically the recently introduced VIMCO and NVIL. Furthermore, we propose a novel baseline that adapts VIMCO to this setting. We demonstrate our method on a phoneme recognition task in clean and noisy environments and show that our method outperforms REINFORCE, with the difference being greater for a more complicated task.
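For context on the estimators the abstract contrasts, here is a brief sketch of the multi-sample bound and leave-one-out baseline from VIMCO (Mnih & Rezende, 2016); the notation is ours, and the paper's adaptation of this baseline to sequential hard alignments is not reproduced here. With $K$ samples $z^{1:K} \sim q(\cdot \mid x)$ and importance weights $w^k = p(x, z^k)/q(z^k \mid x)$, the objective is the multi-sample lower bound

\[
\hat{L}(z^{1:K}) = \log \frac{1}{K} \sum_{k=1}^{K} w^k, \qquad \mathbb{E}\big[\hat{L}\big] \le \log p(x).
\]

REINFORCE differentiates through the discrete samples with the score function, $\nabla \approx \hat{L}\,\nabla \log q(z \mid x)$, which is unbiased but typically high-variance. VIMCO instead gives each sample $j$ a learning signal centered by a baseline built from the other $K-1$ samples,

\[
\hat{L}(z^{1:K}) - \log \frac{1}{K}\Big( \sum_{k \neq j} w^k + \hat{w}^{-j} \Big), \qquad \hat{w}^{-j} = \exp\!\Big( \tfrac{1}{K-1} \sum_{k \neq j} \log w^k \Big),
\]

which requires no learned baseline network and, per the abstract, is the starting point for the novel baseline the authors propose for this sequential setting.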

Authors (6)
  1. Dieterich Lawson (12 papers)
  2. Chung-Cheng Chiu (48 papers)
  3. George Tucker (45 papers)
  4. Colin Raffel (83 papers)
  5. Kevin Swersky (51 papers)
  6. Navdeep Jaitly (67 papers)
Citations (29)
