Learning Dialogue Representations from Consecutive Utterances (2205.13568v2)

Published 26 May 2022 in cs.CL and cs.LG

Abstract: Learning high-quality dialogue representations is essential for solving a variety of dialogue-oriented tasks, especially considering that dialogue systems often suffer from data scarcity. In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks. DSE learns from dialogues by taking consecutive utterances of the same dialogue as positive pairs for contrastive learning. Despite its simplicity, DSE achieves significantly better representation capability than other dialogue representation and universal sentence representation models. We evaluate DSE on five downstream dialogue tasks that examine dialogue representation at different semantic granularities. Experiments in few-shot and zero-shot settings show that DSE outperforms baselines by a large margin. For example, it achieves 13% average performance improvement over the strongest unsupervised baseline in 1-shot intent classification on 6 datasets. We also provide analyses on the benefits and limitations of our model.
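The abstract pins down the core training signal: consecutive utterances from the same dialogue are treated as positive pairs for contrastive learning, so no labels or data augmentation are required. Below is a minimal sketch of that objective, assuming a generic sentence `encoder` and standard in-batch negatives with an InfoNCE loss; the helper name `dse_contrastive_loss`, the batch layout, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def dse_contrastive_loss(encoder, utterances, next_utterances, temperature=0.05):
    """InfoNCE-style loss over consecutive-utterance positive pairs.

    `encoder` is assumed to map a batch of tokenized utterances to
    fixed-size embeddings. `utterances[i]` and `next_utterances[i]` are
    consecutive turns from the same dialogue, forming the i-th positive
    pair; all other in-batch pairings serve as negatives.
    """
    z1 = F.normalize(encoder(utterances), dim=-1)       # (B, d)
    z2 = F.normalize(encoder(next_utterances), dim=-1)  # (B, d)
    logits = z1 @ z2.T / temperature                    # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)              # diagonal = positive pairs
```

Because the positive pairs come purely from dialogue structure, the method is self-supervised: any corpus of multi-turn dialogues yields training pairs for free.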

Authors (7)
  1. Zhihan Zhou (17 papers)
  2. Dejiao Zhang (20 papers)
  3. Wei Xiao (100 papers)
  4. Nicholas Dingwall (3 papers)
  5. Xiaofei Ma (31 papers)
  6. Andrew O. Arnold (9 papers)
  7. Bing Xiang (74 papers)
Citations (20)