Iterative Alternating Neural Attention for Machine Reading (1606.02245v4)

Published 7 Jun 2016 in cs.CL and cs.NE

Abstract: We propose a novel neural attention architecture to tackle machine comprehension tasks, such as answering Cloze-style queries with respect to a document. Unlike previous models, we do not collapse the query into a single vector; instead, we deploy an iterative alternating attention mechanism that allows a fine-grained exploration of both the query and the document. Our model outperforms state-of-the-art baselines on standard machine comprehension benchmarks such as CNN news articles and the Children's Book Test (CBT) dataset.
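The abstract names the mechanism without detail, so the following is a minimal sketch of how such an alternating attention loop can be wired up, assuming PyTorch. The class name, hidden sizes, bilinear-style attention scoring, and fixed step count are illustrative assumptions rather than the paper's exact formulation (the paper, for instance, additionally gates the glimpses before updating the inference state).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlternatingAttentionReader(nn.Module):
    """Hypothetical sketch of iterative alternating attention."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=128, num_steps=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional GRU encoders keep one vector per token, so the
        # query is never collapsed into a single vector.
        self.q_enc = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.d_enc = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Inference GRU that accumulates the alternating glimpses.
        self.inference = nn.GRUCell(4 * hid_dim, hid_dim)
        # Projections from the inference state to attention probes.
        self.query_probe = nn.Linear(hid_dim, 2 * hid_dim)
        self.doc_probe = nn.Linear(3 * hid_dim, 2 * hid_dim)
        self.num_steps = num_steps

    @staticmethod
    def attend(keys, probe):
        # keys: (B, T, 2H), probe: (B, 2H) -> glimpse: (B, 2H), weights: (B, T)
        scores = torch.bmm(keys, probe.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1), weights

    def forward(self, query, doc):
        q, _ = self.q_enc(self.embed(query))  # (B, Tq, 2H)
        d, _ = self.d_enc(self.embed(doc))    # (B, Td, 2H)
        state = q.new_zeros(q.size(0), self.inference.hidden_size)
        for _ in range(self.num_steps):
            # 1) Attentive read of the query, conditioned on the state.
            q_glimpse, _ = self.attend(q, self.query_probe(state))
            # 2) Attentive read of the document, conditioned on the state
            #    and the current query glimpse.
            d_glimpse, d_weights = self.attend(
                d, self.doc_probe(torch.cat([state, q_glimpse], dim=1)))
            # 3) Fold both glimpses into the inference state.
            state = self.inference(
                torch.cat([q_glimpse, d_glimpse], dim=1), state)
        # Final document attention: a distribution over document tokens,
        # usable to score Cloze answer candidates.
        return d_weights
```

For a Cloze-style query one would then, in the spirit of attention-sum readers, sum `d_weights` over all positions at which each candidate token occurs and pick the candidate with the most attention mass.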

Authors (4)
  1. Alessandro Sordoni (53 papers)
  2. Philip Bachman (25 papers)
  3. Adam Trischler (50 papers)
  4. Yoshua Bengio (601 papers)
Citations (117)
