
Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning (2010.15877v1)

Published 29 Oct 2020 in cs.CL and cs.AI

Abstract: Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB). However, the conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types, harboring inherently different characteristics, e.g., difficulty level. This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions. Our method quickly and effectively adapts the meta-learned programmer to new questions based on the most similar questions retrieved from the training data. The meta-learned policy is then used to learn a good programming policy, utilizing the trial trajectories and their rewards for similar questions in the support set. Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set. We have released our code at https://github.com/DevinJake/MRL-CQA.
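The retrieve-then-adapt loop the abstract describes can be illustrated with a minimal sketch: retrieve the top-k training questions most similar to the new question, collect a few trial trajectories on that support set, and apply one REINFORCE-style inner-loop update to the meta-learned policy. Everything below is a hypothetical toy (the bandit-like policy, the cosine similarity retrieval, and the reward function are stand-in assumptions), not the authors' implementation from the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity between two question embeddings (assumed given).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_top_k(query_vec, train_vecs, k=5):
    """Indices of the k training questions most similar to the query."""
    sims = np.array([cosine(query_vec, v) for v in train_vecs])
    return np.argsort(sims)[::-1][:k]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class Policy:
    """Toy 'programmer': a linear policy over a small set of operators."""
    def __init__(self, theta):
        self.theta = theta.copy()  # shape: (n_actions, embed_dim)

    def act_probs(self, x):
        return softmax(self.theta @ x)

def adapt(meta_theta, support_xs, reward_fn, lr=0.5, n_trials=5):
    """One REINFORCE-style inner-loop update on support-set trajectories.

    For each retrieved question, sample n_trials one-step 'programs',
    score them with reward_fn, and move the meta-parameters in the
    direction of the reward-weighted score-function gradient.
    """
    theta = meta_theta.copy()
    n_actions = theta.shape[0]
    grad = np.zeros_like(theta)
    for x in support_xs:
        probs = softmax(theta @ x)
        for _ in range(n_trials):
            a = rng.choice(n_actions, p=probs)
            r = reward_fn(x, a)           # e.g., answer-match reward
            onehot = np.zeros(n_actions)
            onehot[a] = 1.0
            # d/d theta of r * log pi(a|x) for a softmax-linear policy
            grad += r * np.outer(onehot - probs, x)
    return Policy(theta + lr * grad / (len(support_xs) * n_trials))
```

In this sketch the "five trial trajectories for the top-5 retrieved questions" from the abstract correspond to `n_trials=5` and `k=5`; the real method induces multi-step programs over a KB, which this one-step toy deliberately elides.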

Authors (5)
  1. Yuncheng Hua
  2. Yuan-Fang Li
  3. Gholamreza Haffari
  4. Guilin Qi
  5. Tongtong Wu
Citations (35)

