BeamAggR: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering (2406.19820v1)

Published 28 Jun 2024 in cs.CL and cs.AI

Abstract: LLMs have demonstrated strong reasoning capabilities. Nevertheless, they still suffer from factual errors when tackling knowledge-intensive tasks. Retrieval-augmented reasoning represents a promising approach. However, significant challenges still persist, including inaccurate and insufficient retrieval for complex questions, as well as difficulty in integrating multi-source knowledge. To address this, we propose Beam Aggregation Reasoning (BeamAggR), a reasoning framework for knowledge-intensive multi-hop QA. BeamAggR explores and prioritizes promising answers at each hop of the question. Concretely, we parse complex questions into trees comprising atomic and composite questions, followed by bottom-up reasoning. For atomic questions, the LLM conducts reasoning on multi-source knowledge to obtain answer candidates. For composite questions, the LLM combines beam candidates, explores multiple reasoning paths through probabilistic aggregation, and prioritizes the most promising trajectory. Extensive experiments on four open-domain multi-hop reasoning datasets show that our method significantly outperforms SOTA methods by 8.5%. Furthermore, our analysis reveals that BeamAggR elicits better knowledge collaboration and answer aggregation.
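
The abstract describes the framework only at a high level: decompose a complex question into a tree of atomic and composite sub-questions, then reason bottom-up, keeping a beam of scored answer candidates at each node. Below is a minimal, illustrative Python sketch of that control flow, not the authors' implementation; the `QuestionNode` class, the `answer_fn` callable, and the `'{}'` slot convention for composing sub-answers are assumptions introduced here for exposition.

```python
from dataclasses import dataclass, field
from itertools import product
from typing import Callable, Dict, List, Tuple

# A beam is a list of (answer, probability) pairs, highest probability first.
Beam = List[Tuple[str, float]]
# answer_fn(question, knowledge_sources) -> {candidate answer: confidence}
# Hypothetical stand-in for LLM reasoning over multi-source retrieved knowledge.
AnswerFn = Callable[[str, List[str]], Dict[str, float]]


@dataclass
class QuestionNode:
    text: str                                   # composite nodes use '{}' slots for sub-answers
    children: List["QuestionNode"] = field(default_factory=list)

    @property
    def is_atomic(self) -> bool:
        return not self.children


def beam_aggregate(node: QuestionNode, sources: List[str],
                   answer_fn: AnswerFn, beam_size: int = 3) -> Beam:
    """Bottom-up reasoning: atomic nodes return scored candidates directly;
    composite nodes combine child beams, aggregate probabilities over
    reasoning paths, and keep the top-k candidates."""
    if node.is_atomic:
        cands = answer_fn(node.text, sources)
        return sorted(cands.items(), key=lambda kv: -kv[1])[:beam_size]

    # Recurse over sub-questions first (bottom-up order).
    child_beams = [beam_aggregate(c, sources, answer_fn, beam_size)
                   for c in node.children]

    scored: Dict[str, float] = {}
    # Each combination of child answers is one candidate reasoning path.
    for combo in product(*child_beams):
        answers, probs = zip(*combo)
        path_prior = 1.0
        for p in probs:
            path_prior *= p
        composed = node.text.format(*answers)
        for ans, conf in answer_fn(composed, sources).items():
            # Probabilistic aggregation: accumulate evidence for the same
            # answer across different reasoning paths.
            scored[ans] = scored.get(ans, 0.0) + path_prior * conf

    return sorted(scored.items(), key=lambda kv: -kv[1])[:beam_size]
```

In the paper's setting, `answer_fn` would correspond to the LLM reasoning over candidates drawn from multiple knowledge sources (e.g., closed-book generation and several retrievers); the sketch only shows how beam candidates could be propagated and aggregated up the question tree.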

Authors (9)
  1. Zheng Chu (49 papers)
  2. Jingchang Chen (10 papers)
  3. Qianglong Chen (25 papers)
  4. Haotian Wang (60 papers)
  5. Kun Zhu (39 papers)
  6. Xiyuan Du (5 papers)
  7. Weijiang Yu (23 papers)
  8. Ming Liu (421 papers)
  9. Bing Qin (186 papers)
Citations (2)