Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning (2206.03715v2)
Abstract: Commonsense reasoning systems should be able to generalize to diverse reasoning cases. However, most state-of-the-art approaches depend on expensive data annotations and overfit to a specific benchmark without learning how to perform general semantic reasoning. To overcome these drawbacks, zero-shot QA systems have shown promise as a robust learning scheme, transforming a commonsense knowledge graph (KG) into synthetic QA-form samples for model training. Considering the increasing variety of commonsense KGs, this paper aims to extend the zero-shot transfer learning scenario to multi-source settings, where different KGs can be used synergistically. Towards this goal, we propose to mitigate the loss of knowledge caused by interference among the different knowledge sources by developing a modular variant of knowledge aggregation as a new zero-shot commonsense reasoning framework. Results on five commonsense reasoning benchmarks demonstrate the efficacy of our framework, improving performance with multiple KGs.
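The zero-shot scheme described above hinges on converting KG triples into synthetic multiple-choice QA samples before any aggregation takes place. The minimal Python sketch below illustrates one way such a conversion could look; the `Triple` class, `TEMPLATES` mapping, and `make_qa_sample` helper are illustrative assumptions for exposition, not the authors' actual code or templates.

```python
# Hypothetical sketch of KG-to-QA synthesis for zero-shot training.
# A triple (head, relation, tail) becomes a question (from head + relation),
# a correct answer (the tail), and distractors sampled from other tails.
import random
from dataclasses import dataclass


@dataclass
class Triple:
    head: str       # e.g. "go to a restaurant"
    relation: str   # e.g. "xIntent" (ATOMIC-style) or "Causes" (ConceptNet-style)
    tail: str       # e.g. "to eat a meal"


# Assumed natural-language templates per relation; in practice each KG
# would supply its own relation-to-text mapping.
TEMPLATES = {
    "xIntent": "Before {head}, the person wanted",
    "Causes": "{head} causes",
}


def make_qa_sample(triple: Triple, all_tails: list, num_distractors: int = 2):
    """Build one synthetic QA sample from a single KG triple."""
    question = TEMPLATES[triple.relation].format(head=triple.head)
    distractors = random.sample(
        [t for t in all_tails if t != triple.tail], num_distractors
    )
    options = distractors + [triple.tail]
    random.shuffle(options)
    return {
        "question": question,
        "options": options,
        "label": options.index(triple.tail),
    }


if __name__ == "__main__":
    kg = [
        Triple("go to a restaurant", "xIntent", "to eat a meal"),
        Triple("rain", "Causes", "wet streets"),
        Triple("study hard", "xIntent", "to pass the exam"),
    ]
    tails = [t.tail for t in kg]
    print(make_qa_sample(kg[0], tails))
```

In a multi-source setting, samples like these would be generated per KG and used to train source-specific modules that are later aggregated, which is where the paper's modular variant of knowledge aggregation comes in.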
- Yu Jin Kim (4 papers)
- Beong-woo Kwak (12 papers)
- Youngwook Kim (30 papers)
- Reinald Kim Amplayo (28 papers)
- Seung-won Hwang (59 papers)
- Jinyoung Yeo (46 papers)