
Semantic Sentence Composition Reasoning for Multi-Hop Question Answering (2203.00160v1)

Published 1 Mar 2022 in cs.CL, cs.AI, and cs.IR

Abstract: Due to insufficient data, existing multi-hop open-domain question answering systems need to effectively identify relevant supporting facts for each question. To alleviate the challenges of semantic factual sentence retrieval and multi-hop context expansion, we present a semantic sentence composition reasoning approach for the multi-hop question answering task, which consists of two key modules: a multi-stage semantic matching module (MSSM) and a factual sentence composition module (FSC). By combining factual sentences with multi-stage semantic retrieval, our approach provides more comprehensive contextual information for model training and reasoning. Experimental results demonstrate that our model can incorporate existing pre-trained LLMs and outperforms the existing SOTA method on the QASC task by about 9%.
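The abstract only names the two modules, so the following is a minimal sketch of the general idea as described there: a multi-stage semantic matching step that first retrieves facts against the question and then expands the context against question-plus-fact queries, followed by a composition step that pairs candidate facts into richer contexts for a pre-trained model. The corpus, encoder name, separator token, and hop/threshold choices below are illustrative assumptions, not the paper's actual implementation or settings.

```python
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

# Toy fact corpus and question (QASC-style), purely for illustration.
corpus = [
    "Differential heating of air produces wind.",
    "Wind is used for producing electricity.",
    "Solar panels convert sunlight into electricity.",
]
question = "What can be used to produce electricity from moving air?"

# Assumed off-the-shelf sentence encoder; the paper does not specify this model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

def top_k(query: str, k: int = 2) -> list[int]:
    """Return indices of the k corpus sentences most similar to the query."""
    q_emb = encoder.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, corpus_emb)[0]
    return scores.topk(k).indices.tolist()

# Stage 1: facts matched directly against the question.
hop1 = top_k(question)
# Stage 2: expand the context by matching against question + each hop-1 fact.
hop2 = {j for i in hop1 for j in top_k(question + " " + corpus[i])}
candidates = sorted(set(hop1) | hop2)

# Factual sentence composition: pair up candidate facts and append them to the
# question, yielding composed contexts a pre-trained reader could then score.
composed_contexts = [
    f"{question} [SEP] {corpus[i]} {corpus[j]}"
    for i, j in combinations(candidates, 2)
]
print(composed_contexts)
```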

Authors (1)
  1. Qianglong Chen (25 papers)
Citations (2)
