Small Changes Make Big Differences: Improving Multi-turn Response Selection in Dialogue Systems via Fine-Grained Contrastive Learning (2111.10154v2)

Published 19 Nov 2021 in cs.CL and cs.IR

Abstract: Retrieval-based dialogue response selection aims to find a proper response from a candidate set given a multi-turn context. Methods based on pre-trained language models (PLMs) have yielded significant improvements on this task. The sequence representation plays a key role in learning the degree of matching between the dialogue context and the response. However, we observe that context-response pairs sharing the same context tend to have highly similar sequence representations under PLMs, which makes it hard to distinguish positive responses from negative ones. Motivated by this, we propose a novel Fine-Grained Contrastive (FGC) learning method for the response selection task based on PLMs. This FGC learning strategy helps PLMs generate more distinguishable matching representations for each dialogue at a fine granularity, and thus make better predictions when selecting positive responses. Empirical studies on two benchmark datasets demonstrate that the proposed FGC learning method generally and significantly improves the performance of existing PLM-based matching models.
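
To make the idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of an InfoNCE-style contrastive term over the matching representations the abstract describes. This is not the paper's released code: the function name `fgc_style_loss`, the two-views setup (e.g., two dropout passes over the positive pair), and the temperature `tau` are all assumptions for illustration, and the paper's fine-grained formulation may contrast representations at a finer granularity than whole sequences.

```python
# Hypothetical sketch of a contrastive term over context-response
# matching representations. Not the authors' implementation.
import torch
import torch.nn.functional as F

def fgc_style_loss(pos_rep: torch.Tensor,
                   pos_rep_aug: torch.Tensor,
                   neg_reps: torch.Tensor,
                   tau: float = 0.1) -> torch.Tensor:
    """pos_rep / pos_rep_aug: two views (e.g., two dropout forward passes)
    of the PLM representation of the (context, positive-response) pair,
    each of shape (h,).
    neg_reps: representations of (context, negative-response) pairs that
    share the same context, shape (k, h)."""
    # Work in cosine-similarity space.
    anchor = F.normalize(pos_rep, dim=-1)          # (h,)
    pos = F.normalize(pos_rep_aug, dim=-1)         # (h,)
    negs = F.normalize(neg_reps, dim=-1)           # (k, h)
    # Candidates sharing a context tend to get near-identical PLM
    # representations; the loss pulls the two positive views together
    # and pushes the negative pairs away from the anchor.
    pos_sim = (anchor * pos).sum(-1, keepdim=True)  # (1,)
    neg_sims = negs @ anchor                        # (k,)
    logits = torch.cat([pos_sim, neg_sims]) / tau   # (k+1,)
    target = torch.zeros(1, dtype=torch.long)       # positive at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)

# Toy usage with random vectors standing in for PLM [CLS] outputs.
h, k = 768, 9
loss = fgc_style_loss(torch.randn(h), torch.randn(h), torch.randn(k, h))
```

In a setup like this, the contrastive term would typically be added to the standard matching (cross-entropy) loss during fine-tuning, while inference still ranks candidates with the usual matching head.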

Authors (6)
  1. Yuntao Li (19 papers)
  2. Can Xu (98 papers)
  3. Huang Hu (18 papers)
  4. Lei Sha (34 papers)
  5. Yan Zhang (954 papers)
  6. Daxin Jiang (138 papers)
Citations (11)