Hierarchical Memory Networks (1605.07427v1)

Published 24 May 2016 in stat.ML, cs.CL, cs.LG, and cs.NE

Abstract: Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the-art approximate MIPS techniques and report results on SimpleQuestions, a challenging large-scale factoid question answering task.
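The abstract contrasts a full soft-attention read over a flat memory with a read restricted to the slots returned by (approximate) Maximum Inner Product Search. The sketch below is not the authors' code; it is a minimal NumPy illustration in which the names `soft_attention_read` and `k_mips_read` are assumptions, and exact top-K selection stands in for the approximate MIPS structures (hashing, trees, clustering) the paper actually evaluates.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention_read(query, memory):
    """Flat soft attention: softmax over inner products with every slot.
    Cost is O(N * d) for N memory slots of dimension d."""
    scores = memory @ query            # (N,)
    weights = softmax(scores)          # (N,)
    return weights @ memory            # (d,) weighted sum of all slots

def k_mips_read(query, memory, k=10):
    """Hybrid read: keep only the K slots with the largest inner product
    (exact top-K here, approximate MIPS in the paper) and apply the
    softmax only over that small candidate set."""
    scores = memory @ query
    top_idx = np.argpartition(-scores, k)[:k]   # K candidate slots
    weights = softmax(scores[top_idx])
    return weights @ memory[top_idx]

# Toy usage with random memory contents (shapes are illustrative).
rng = np.random.default_rng(0)
memory = rng.normal(size=(100_000, 64))   # 100k slots, 64-dim embeddings
query = rng.normal(size=64)
full = soft_attention_read(query, memory)
approx = k_mips_read(query, memory, k=100)
print(full.shape, approx.shape)           # both (64,)
```

The point of the hierarchical design is that `k_mips_read` only touches a small candidate set per query, so the softmax (and its gradient during training) is computed over K slots instead of all N.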

Authors (6)
  1. Sarath Chandar (93 papers)
  2. Sungjin Ahn (51 papers)
  3. Hugo Larochelle (87 papers)
  4. Pascal Vincent (78 papers)
  5. Gerald Tesauro (29 papers)
  6. Yoshua Bengio (601 papers)
Citations (83)
