Multi-turn Response Selection using Dialogue Dependency Relations (2010.01502v3)

Published 4 Oct 2020 in cs.CL

Abstract: Multi-turn response selection is a task designed for developing dialogue agents. Performance on this task has improved remarkably with pre-trained language models. However, these models simply concatenate the turns in the dialogue history as the input and largely ignore the dependencies between turns. In this paper, we propose a dialogue extraction algorithm that transforms a dialogue history into threads based on their dependency relations. Each thread can be regarded as a self-contained sub-dialogue. We also propose a Thread-Encoder model that encodes threads and candidates into compact representations using pre-trained Transformers and computes the matching score through an attention layer. The experiments show that dependency relations are helpful for dialogue context understanding, and our model outperforms the state-of-the-art baselines on both DSTC7 and DSTC8*, with competitive results on UbuntuV2.
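The core idea in the abstract is that a flat dialogue history can be split into self-contained threads by following turn-to-turn dependency relations. A minimal sketch of that extraction step, assuming each turn is annotated with the index of the turn it depends on (the function name, data layout, and root-to-leaf traversal are illustrative assumptions, not the authors' implementation):

```python
def extract_threads(turns, parents):
    """Split a dialogue history into threads via dependency relations.

    turns:   list of utterance strings in chronological order.
    parents: parents[i] is the index of the turn that turn i depends on,
             or None if turn i starts a new thread (a root turn).
    Returns one thread (root-to-leaf list of turns) per leaf turn.
    """
    # Turns that some later turn depends on; everything else is a leaf.
    has_child = {p for p in parents if p is not None}
    leaves = [i for i in range(len(turns)) if i not in has_child]

    threads = []
    for leaf in leaves:
        path, node = [], leaf
        while node is not None:          # walk dependency chain to the root
            path.append(turns[node])
            node = parents[node]
        threads.append(list(reversed(path)))  # restore chronological order
    return threads

# Toy example: turn 2 replies to turn 0, turn 3 replies to turn 1,
# so the history splits into two independent sub-dialogues.
history = ["A: my wifi broke", "B: try rebooting",
           "A: which driver is it?", "B: the ath9k one"]
deps = [None, None, 0, 1]
print(extract_threads(history, deps))
```

Each resulting thread could then be encoded separately (e.g. by a pre-trained Transformer, as the paper's Thread-Encoder does) instead of concatenating all turns into one sequence.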

Authors (5)
  1. Qi Jia
  2. Yizhu Liu
  3. Siyu Ren
  4. Kenny Q. Zhu
  5. Haifeng Tang
Citations (42)