Self-attention Comparison Module for Boosting Performance on Retrieval-based Open-Domain Dialog Systems (2012.11357v1)

Published 21 Dec 2020 in cs.CL

Abstract: Since pre-trained LLMs have come into wide use, retrieval-based open-domain dialog systems have attracted considerable attention from researchers. Most previous works select a suitable response according only to the matching degree between the query and each individual candidate response. Although good performance has been achieved, these works ignore the comparison among the candidate responses, which could provide rich information for selecting the most appropriate one. Intuitively, better decisions can be made when a model has access to the comparison information among all the candidate responses. To leverage this comparison information, in this paper we propose a novel plug-in Self-attention Comparison Module for retrieval-based open-domain dialog systems, called SCM. Extensive experimental results demonstrate that the proposed self-attention comparison module effectively boosts the performance of existing retrieval-based open-domain dialog systems. In addition, we have publicly released our source code for future research.
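The core idea — letting candidate responses attend to one another before scoring — can be sketched with plain scaled dot-product self-attention over candidate embeddings. This is a minimal illustration, not the paper's actual SCM implementation; the function name `self_attention_compare` and the single-head, no-projection setup are assumptions for brevity.

```python
import numpy as np

def self_attention_compare(cand_embs: np.ndarray) -> np.ndarray:
    """Mix information across candidate response embeddings.

    cand_embs: (n_candidates, dim) array of hypothetical candidate
    response representations (e.g. from a pre-trained encoder).
    Returns comparison-aware embeddings of the same shape.
    """
    # Scaled dot-product attention scores between every pair of candidates.
    scale = np.sqrt(cand_embs.shape[-1])
    scores = cand_embs @ cand_embs.T / scale
    # Numerically stable row-wise softmax.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each candidate's new embedding is a weighted mix of all candidates,
    # so downstream scoring can exploit cross-candidate comparison.
    return weights @ cand_embs

# Toy usage: three 4-dim candidate embeddings.
enriched = self_attention_compare(np.random.rand(3, 4))
```

A downstream scorer would then rank the enriched embeddings (or their residual sum with the originals) against the query, which is what makes the module "plug-in": it sits between the existing encoder and the existing matching head.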

Authors (5)
  1. Tian Lan (162 papers)
  2. Xian-Ling Mao (76 papers)
  3. Zhipeng Zhao (16 papers)
  4. Wei Wei (424 papers)
  5. Heyan Huang (107 papers)
Citations (1)