A Simple but Effective Method to Incorporate Multi-turn Context with BERT for Conversational Machine Comprehension (1905.12848v1)

Published 30 May 2019 in cs.CL

Abstract: Conversational machine comprehension (CMC) requires understanding the context of multi-turn dialogue. Using BERT, a pre-trained language model, has been successful for single-turn machine comprehension, while modeling multiple turns of question answering with BERT has not been established because BERT has a limit on the number and the length of input sequences. In this paper, we propose a simple but effective method with BERT for CMC. Our method uses BERT to encode a paragraph independently conditioned on each question and each answer in a multi-turn context. Then, the method predicts an answer on the basis of the paragraph representations encoded with BERT. Experiments with two representative CMC datasets, QuAC and CoQA, show that our method outperformed recently published methods (+0.8 F1 on QuAC and +2.1 F1 on CoQA). In addition, we conducted a detailed analysis of the effects of the number and types of dialogue history on CMC accuracy, and we found that the gold answer history, which may not be available in an actual conversation, contributed most to model performance on both datasets.
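
The sketch below illustrates the high-level idea from the abstract: the paragraph is encoded with BERT once per dialogue-history turn (each encoding conditioned on one question or answer), and an answer span is predicted from the combined paragraph representations. This is not the authors' code; the model name, the simple averaging of per-turn representations, and the linear span head are illustrative assumptions standing in for the paper's actual architecture.

```python
# Hedged sketch of multi-turn context encoding with BERT for CMC.
# Assumptions: HuggingFace Transformers, bert-base-uncased, mean-pooled
# combination of per-turn paragraph encodings, and a linear span head.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")


class MultiTurnReader(nn.Module):
    """Combines per-turn BERT encodings of the paragraph and predicts a span."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.span_head = nn.Linear(hidden_size, 2)  # start/end logits

    def forward(self, per_turn_hidden):
        # per_turn_hidden: list of (seq_len, hidden) tensors, one per turn.
        # Here they are simply averaged; the paper combines them differently.
        combined = torch.stack(per_turn_hidden, dim=0).mean(dim=0)
        logits = self.span_head(combined)                 # (seq_len, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)


paragraph = "The passage being read ..."
history = ["Q1", "A1", "Q2"]  # prior questions/answers plus the current question

# Encode the paragraph independently, conditioned on each turn of the history.
per_turn_hidden = []
with torch.no_grad():
    for turn in history:
        enc = tokenizer(turn, paragraph, return_tensors="pt",
                        truncation=True, max_length=384, padding="max_length")
        per_turn_hidden.append(bert(**enc).last_hidden_state.squeeze(0))

reader = MultiTurnReader()
start_logits, end_logits = reader(per_turn_hidden)
```

The key point carried over from the abstract is that each history turn is paired with the paragraph in its own BERT pass, so the method sidesteps BERT's limit on input length rather than concatenating the whole dialogue into one sequence.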

Authors (5)
  1. Yasuhito Ohsugi (1 paper)
  2. Itsumi Saito (9 papers)
  3. Kyosuke Nishida (23 papers)
  4. Hisako Asano (6 papers)
  5. Junji Tomita (7 papers)
Citations (43)
