Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning (2206.03715v2)

Published 8 Jun 2022 in cs.AI, cs.CL, and cs.LG

Abstract: Commonsense reasoning systems should be able to generalize to diverse reasoning cases. However, most state-of-the-art approaches depend on expensive data annotations and overfit to a specific benchmark without learning how to perform general semantic reasoning. To overcome these drawbacks, zero-shot QA systems have shown promise as a robust learning scheme by transforming a commonsense knowledge graph (KG) into synthetic QA-form samples for model training. Considering the increasing number of different commonsense KGs, this paper aims to extend the zero-shot transfer learning scenario to multiple-source settings, where different KGs can be utilized synergistically. Toward this goal, we propose to mitigate the loss of knowledge caused by interference among the different knowledge sources by developing a modular variant of knowledge aggregation as a new zero-shot commonsense reasoning framework. Results on five commonsense reasoning benchmarks demonstrate the efficacy of our framework, improving performance when multiple KGs are used.
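The abstract's zero-shot scheme rests on converting KG triples into synthetic QA-form training samples. The sketch below illustrates that conversion step only, under assumed triples, relation templates, and a simple distractor-sampling rule; it is not the paper's actual pipeline, and the modular aggregation over multiple KGs is not shown.

```python
# Minimal sketch of turning commonsense KG triples into synthetic
# multiple-choice QA samples (illustrative assumptions throughout).
import random

# Hypothetical commonsense KG triples: (head, relation, tail)
TRIPLES = [
    ("go to a restaurant", "xIntent", "to eat a meal"),
    ("win the lottery", "xReact", "excited"),
    ("study hard", "xEffect", "passes the exam"),
]

# Hand-written natural-language templates per relation (assumed for illustration)
TEMPLATES = {
    "xIntent": "Why would someone {head}?",
    "xReact": "How does someone feel after they {head}?",
    "xEffect": "What happens as a result when someone {head}?",
}

def make_qa_sample(triple, all_triples, num_distractors=2):
    """Turn one KG triple into a synthetic QA sample: the tail entity is the
    correct answer, and tails of other triples serve as distractors."""
    head, relation, tail = triple
    question = TEMPLATES[relation].format(head=head)
    distractors = random.sample(
        [t[2] for t in all_triples if t[2] != tail], k=num_distractors
    )
    choices = distractors + [tail]
    random.shuffle(choices)
    return {"question": question, "choices": choices, "answer": tail}

if __name__ == "__main__":
    for triple in TRIPLES:
        print(make_qa_sample(triple, TRIPLES))
```

A model trained on such samples from several KGs would, per the abstract, route knowledge through separate modules to reduce interference between sources before aggregation.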

Authors (6)
  1. Yu Jin Kim (4 papers)
  2. Beong-woo Kwak (12 papers)
  3. Youngwook Kim (30 papers)
  4. Reinald Kim Amplayo (28 papers)
  5. Seung-won Hwang (59 papers)
  6. Jinyoung Yeo (46 papers)
Citations (12)