Leveraging Abstract Meaning Representation for Knowledge Base Question Answering (2012.01707v2)
Abstract: Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, the need for reasoning, and a lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.
- Pavan Kapanipathi
- Ibrahim Abdelaziz
- Srinivas Ravishankar
- Salim Roukos
- Alexander Gray
- Ramon Astudillo
- Maria Chang
- Cristina Cornelio
- Saswati Dana
- Achille Fokoue
- Dinesh Garg
- Alfio Gliozzo
- Sairam Gurajada
- Hima Karanam
- Naweed Khan
- Dinesh Khandelwal
- Young-Suk Lee
- Yunyao Li
- Francois Luus
- Ndivhuwo Makondo
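To make the pipeline described in the abstract concrete, here is a minimal sketch of a modular AMR-to-SPARQL question-answering flow over DBpedia. The module interfaces, the `LogicalQuery` representation, and the hard-coded example linking are hypothetical placeholders and do not reproduce NSQA's actual AMR parser, entity/relationship linkers, or neuro-symbolic reasoner; only the final SPARQL execution against the public DBpedia endpoint uses a real API (SPARQLWrapper).

```python
# Sketch of a modular AMR-based KBQA pipeline: question -> AMR parse ->
# candidate logical query aligned to DBpedia -> SPARQL execution.
# The parsing and linking steps below are stand-ins, not NSQA's components.

from dataclasses import dataclass, field
from SPARQLWrapper import SPARQLWrapper, JSON


@dataclass
class LogicalQuery:
    """A candidate logical form derived from an AMR graph (hypothetical)."""
    target_var: str
    triples: list = field(default_factory=list)  # (subject, predicate, object)

    def to_sparql(self) -> str:
        body = " . ".join(f"{s} {p} {o}" for s, p, o in self.triples)
        return f"SELECT DISTINCT {self.target_var} WHERE {{ {body} }} LIMIT 10"


def parse_to_amr(question: str):
    """Stand-in for a task-independent AMR parser."""
    # NSQA produces a full AMR graph here; this stub just wraps the question.
    return {"question": question}


def amr_to_logical_query(amr) -> LogicalQuery:
    """Stand-in for graph transformation plus entity/relationship linking.

    A real system aligns AMR nodes to DBpedia entities and relations; this
    stub hard-codes the alignment for one example question.
    """
    return LogicalQuery(
        target_var="?spouse",
        triples=[("dbr:Barack_Obama", "dbo:spouse", "?spouse")],
    )


def execute_on_dbpedia(query: LogicalQuery):
    """Run the candidate SPARQL query against the public DBpedia endpoint."""
    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
    endpoint.setQuery(
        "PREFIX dbr: <http://dbpedia.org/resource/> "
        "PREFIX dbo: <http://dbpedia.org/ontology/> " + query.to_sparql()
    )
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    var = query.target_var.lstrip("?")
    return [b[var]["value"] for b in results["results"]["bindings"]]


if __name__ == "__main__":
    amr = parse_to_amr("Who is the spouse of Barack Obama?")
    logical_query = amr_to_logical_query(amr)
    print(execute_on_dbpedia(logical_query))
```

The point of the sketch is the module boundaries: each stage (semantic parsing, graph transformation/linking, query execution) is independently replaceable and separately trainable, which is what lets the pipeline avoid end-to-end KBQA training data.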