An Empirical Analysis of Multiple-Turn Reasoning Strategies in Reading Comprehension Tasks (1711.03230v1)

Published 9 Nov 2017 in cs.CL

Abstract: Reading comprehension (RC) is a challenging task that requires synthesis of information across sentences and multiple turns of reasoning. Using a state-of-the-art RC model, we empirically investigate the performance of single-turn and multiple-turn reasoning on the SQuAD and MS MARCO datasets. The RC model is an end-to-end neural network with iterative attention, and uses reinforcement learning to dynamically control the number of turns. We find that multiple-turn reasoning outperforms single-turn reasoning for all question and answer types; further, we observe that enabling a flexible number of turns generally improves upon a fixed multiple-turn strategy. We achieve results competitive with the state of the art on these two datasets.
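The abstract describes a model that attends iteratively over the passage and uses a learned, stochastic stopping decision to control the number of reasoning turns. Below is a minimal toy sketch of that control flow in NumPy; all names, shapes, and the termination gate are illustrative assumptions, not the paper's actual architecture (which trains the stop action with reinforcement learning).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_turn_read(passage, query, max_turns=5, theta=None):
    """One episode of multi-turn reading (illustrative sketch).

    passage: (n_tokens, d) token vectors; query: (d,) question vector.
    Each turn attends over the passage, refines a reasoning state, and a
    sigmoid termination gate stochastically decides whether to stop --
    mirroring the dynamically controlled number of turns in the abstract.
    """
    d = query.shape[0]
    if theta is None:
        theta = rng.normal(scale=0.1, size=d)  # hypothetical gate weights
    state = query.copy()
    for turn in range(1, max_turns + 1):
        attn = softmax(passage @ state)        # attention over passage tokens
        context = attn @ passage               # attended passage summary
        state = np.tanh(state + context)       # refine the reasoning state
        p_stop = 1.0 / (1.0 + np.exp(-theta @ state))
        if rng.random() < p_stop:              # sampled stop action; the paper
            break                              # trains this with RL (REINFORCE-style)
    return state, turn

passage = rng.normal(size=(12, 8))             # toy passage: 12 token vectors
query = rng.normal(size=8)
state, turns = multi_turn_read(passage, query)
print(f"stopped after {turns} turn(s)")
```

A fixed-turn baseline corresponds to forcing the loop to run exactly `max_turns` iterations; the flexible strategy the paper finds beneficial lets the gate cut reasoning short on easy questions.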

Authors (4)
  1. Yelong Shen (83 papers)
  2. Xiaodong Liu (162 papers)
  3. Kevin Duh (65 papers)
  4. Jianfeng Gao (344 papers)
Citations (12)
