Modeling Topical Relevance for Multi-Turn Dialogue Generation (2009.12735v1)

Published 27 Sep 2020 in cs.CL, cs.HC, and cs.LG

Abstract: Topic drift is a common phenomenon in multi-turn dialogue. Therefore, an ideal dialogue generation model should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly. However, existing models usually use word- or sentence-level similarities to detect the relevant contexts, which fail to capture topic-level relevance well. In this paper, we propose a new model, named STAR-BTM, to tackle this problem. First, the Biterm Topic Model is pre-trained on the whole training dataset. Then, topic-level attention weights are computed based on the topic representation of each context. Finally, the attention weights and the topic distribution are utilized in the decoding process to generate the corresponding responses. Experimental results on both Chinese customer service data and English Ubuntu dialogue data show that STAR-BTM significantly outperforms several state-of-the-art methods in terms of both metric-based and human evaluations.
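The abstract outlines a three-step pipeline: pre-train a Biterm Topic Model, compute topic-level attention weights from each context turn's topic representation, and feed those weights and topic distributions into the decoder. The following is a minimal, hypothetical PyTorch sketch of the topic-level attention step only; the class, parameter names, and scoring function are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicLevelAttention(nn.Module):
    """Sketch: attention over dialogue turns driven by topic distributions.

    Each context turn is represented by a topic distribution (e.g. inferred by
    a pre-trained Biterm Topic Model). Attention scores compare each turn's
    projected topic vector against the current decoder state; the resulting
    weights form a topic-aware context vector for decoding.
    """

    def __init__(self, num_topics: int, hidden_size: int):
        super().__init__()
        # Project per-turn topic distributions into the decoder's hidden space.
        self.topic_proj = nn.Linear(num_topics, hidden_size)

    def forward(self, topic_dists: torch.Tensor, decoder_state: torch.Tensor):
        # topic_dists:   (batch, num_turns, num_topics) per-turn topic vectors
        # decoder_state: (batch, hidden_size)           current decoder hidden state
        keys = self.topic_proj(topic_dists)                         # (B, T, H)
        scores = torch.bmm(keys, decoder_state.unsqueeze(2))        # (B, T, 1)
        weights = F.softmax(scores.squeeze(2), dim=1)               # (B, T)
        context = torch.bmm(weights.unsqueeze(1), keys).squeeze(1)  # (B, H)
        return weights, context


if __name__ == "__main__":
    batch, turns, num_topics, hidden = 2, 4, 50, 128
    attn = TopicLevelAttention(num_topics, hidden)
    topic_dists = F.softmax(torch.randn(batch, turns, num_topics), dim=-1)
    decoder_state = torch.randn(batch, hidden)
    weights, context = attn(topic_dists, decoder_state)
    print(weights.shape, context.shape)  # torch.Size([2, 4]) torch.Size([2, 128])
```

In the full model, the resulting weights and the topic distribution itself would additionally condition the response decoder at each generation step, as described in the abstract.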

Authors (6)
  1. Hainan Zhang (21 papers)
  2. Yanyan Lan (87 papers)
  3. Liang Pang (94 papers)
  4. Hongshen Chen (23 papers)
  5. Zhuoye Ding (16 papers)
  6. Dawei Yin (165 papers)
Citations (51)