
Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning (2004.03070v2)

Published 7 Apr 2020 in cs.CL and cs.AI

Abstract: We study the problem of generating inferential texts about events for a variety of commonsense relations such as \textit{if-else} relations. Existing approaches typically use limited evidence from training examples and learn each relation individually. In this work, we use multiple knowledge sources as fuel for the model. Existing commonsense knowledge bases like ConceptNet are dominated by taxonomic knowledge (e.g., the \textit{isA} and \textit{relatedTo} relations) and contain only a limited amount of inferential knowledge. We therefore use not only structured commonsense knowledge bases but also natural language snippets from search-engine results. These sources are incorporated into a generative base model via a key-value memory network. In addition, we introduce a meta-learning based multi-task learning algorithm. For each targeted commonsense relation, we regard learning from examples of the other relations as the meta-training process and evaluation on examples from the targeted relation as the meta-test process. We conduct experiments on the Event2Mind and ATOMIC datasets. Results show that both the integration of multiple knowledge sources and the use of the meta-learning algorithm improve performance.
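The key-value memory network mentioned in the abstract addresses retrieved knowledge entries with the encoded event as a query and mixes the memory values into generation. Below is a minimal sketch of one memory read, assuming pre-encoded key and value matrices in PyTorch; the names and shapes are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def kv_memory_read(query: torch.Tensor,
                   keys: torch.Tensor,
                   values: torch.Tensor) -> torch.Tensor:
    """One read over a key-value memory.

    query:  (d,)   encoded event context
    keys:   (n, d) encoded knowledge entries (addressing side)
    values: (n, d) encoded knowledge entries (content side)
    Returns a (d,) attention-weighted mix of the values.
    """
    scores = keys @ query                # (n,) dot-product addressing
    weights = F.softmax(scores, dim=0)   # normalize over memory slots
    return weights @ values              # (d,) weighted sum of values
```

The meta-learning procedure treats examples from the other relations as meta-training and examples from the targeted relation as meta-test. The following is a first-order sketch in the spirit of MAML/Reptile, assuming a PyTorch model and (input, target) batches; the function, learning rates, and the first-order outer update are assumptions for illustration, not the paper's exact algorithm.

```python
import copy
import torch

def meta_step(model, loss_fn, other_relation_batches, target_batch,
              inner_lr=1e-3, outer_lr=1e-4):
    """One meta-update: adapt on other relations, evaluate on the target."""
    # Meta-training (inner loop): adapt a copy of the model on batches
    # drawn from the non-target relations.
    fast = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for x, y in other_relation_batches:
        inner_opt.zero_grad()
        loss_fn(fast(x), y).backward()
        inner_opt.step()

    # Meta-test (outer loop): score the adapted weights on the targeted
    # relation, then nudge the original weights toward them (a Reptile-style
    # first-order update, used here to avoid second-order gradients).
    with torch.no_grad():
        x, y = target_batch
        meta_loss = loss_fn(fast(x), y)
        for p, q in zip(model.parameters(), fast.parameters()):
            p.add_(q - p, alpha=outer_lr)
    return meta_loss.item()
```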

Authors (9)
  1. Daya Guo (37 papers)
  2. Akari Asai (35 papers)
  3. Duyu Tang (65 papers)
  4. Nan Duan (172 papers)
  5. Ming Gong (246 papers)
  6. Linjun Shou (53 papers)
  7. Daxin Jiang (138 papers)
  8. Jian Yin (67 papers)
  9. Ming Zhou (182 papers)
Citations (2)
