Dynamic Integration of Background Knowledge in Neural NLU Systems (1706.02596v3)

Published 8 Jun 2017 in cs.CL, cs.AI, and cs.NE

Abstract: Common-sense and background knowledge are required to understand natural language, but in most neural natural language understanding (NLU) systems, this knowledge must be acquired from training corpora during learning, and then it is static at test time. We introduce a new architecture for the dynamic integration of explicit background knowledge in NLU models. A general-purpose reading module reads background knowledge in the form of free-text statements (together with task-specific text inputs) and yields refined word representations to a task-specific NLU architecture that reprocesses the task inputs with these representations. Experiments on document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of the approach. Analysis shows that our model learns to exploit knowledge in a semantically appropriate way.
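
To make the abstract's architecture concrete, here is a minimal sketch, assuming PyTorch: a bidirectional LSTM reads free-text background statements and produces gated updates to static word embeddings, which a downstream task model would then reprocess. The class name `KnowledgeReader`, the gating rule, the padding convention, and all dimensions are illustrative assumptions, not the authors' implementation; the paper's actual refinement strategy differs in detail.

```python
import torch
import torch.nn as nn


class KnowledgeReader(nn.Module):
    """Hypothetical reading module: reads free-text background statements
    and yields refined embeddings for the words appearing in them."""

    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.reader = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.project = nn.Linear(2 * hidden_dim, embed_dim)
        self.gate = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, statement_ids: torch.Tensor) -> dict:
        """statement_ids: (num_statements, max_len) padded token ids.
        Returns a map from token id to its knowledge-refined embedding."""
        states, _ = self.reader(self.embed(statement_ids))  # (S, L, 2H)
        refined = self.project(states)                      # (S, L, E)
        updates = {}
        for s in range(statement_ids.size(0)):
            for t in range(statement_ids.size(1)):
                tok = int(statement_ids[s, t])
                if tok == 0:  # padding id (an assumption of this sketch)
                    continue
                base = self.embed.weight[tok]
                g = torch.sigmoid(self.gate(torch.cat([base, refined[s, t]])))
                # Gated mix of the static embedding and the reading-module
                # state, so uninformative statements can be ignored.
                updates[tok] = g * base + (1.0 - g) * refined[s, t]
        return updates


# Usage sketch: refine the embedding table at test time, then let the
# task-specific model reprocess its inputs with the updated embeddings.
reader = KnowledgeReader(vocab_size=10_000, embed_dim=64, hidden_dim=64)
statements = torch.randint(1, 10_000, (4, 12))  # 4 padded background statements
with torch.no_grad():
    for tok, vec in reader(statements).items():
        reader.embed.weight[tok] = vec
```

The gated combination is one plausible way to realize "dynamic" integration: because the updates depend on whatever statements are supplied at test time, the same trained module can exploit new background knowledge without retraining.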

Authors (3)
  1. Dirk Weissenborn (17 papers)
  2. Chris Dyer (91 papers)
  3. Tomáš Kočiský (12 papers)
Citations (61)
