
Top k Memory Candidates in Memory Networks for Common Sense Reasoning (1801.04622v2)

Published 14 Jan 2018 in cs.AI

Abstract: Successful completion of a reasoning task requires the agent to have relevant prior knowledge or some given context about the world's dynamics. Usually, the information provided to the system for a reasoning task is just the query or a supporting story, which is often not enough for commonsense reasoning tasks. The goal here is that, if the information provided alongside the question is not sufficient to answer it correctly, the model should choose the k most relevant documents to aid its inference process. In this work, the model dynamically selects the top k most relevant memory candidates that can be used to successfully solve reasoning tasks. Experiments were conducted on a subset of Winograd Schema Challenge (WSC) problems to show that the proposed model has potential for commonsense reasoning. The WSC is a test of machine intelligence, designed as an improvement on the Turing test.
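The core idea of scoring candidate memories against a query and keeping the top k can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes pre-computed dense embeddings and uses cosine similarity as the relevance score; the function name and signature are hypothetical.

```python
import numpy as np

def top_k_memories(query_vec, memory_vecs, k=3):
    """Score each memory candidate against the query; return top-k indices.

    query_vec:   (d,)  query embedding (assumed pre-computed)
    memory_vecs: (n, d) embeddings of candidate memories/documents
    """
    # Cosine similarity between the query and each memory candidate.
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    scores = m @ q
    # Indices of the k highest-scoring candidates, best first.
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]
```

The selected memories would then be fed to the downstream reasoning module as additional context alongside the original query.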
