
Dialogue Response Selection with Hierarchical Curriculum Learning (2012.14756v3)

Published 29 Dec 2020 in cs.CL

Abstract: We study the learning of a matching model for dialogue response selection. Motivated by the recent finding that models trained with random negative samples are not ideal in real-world scenarios, we propose a hierarchical curriculum learning framework that trains the matching model in an "easy-to-difficult" scheme. Our learning framework consists of two complementary curricula: (1) corpus-level curriculum (CC); and (2) instance-level curriculum (IC). In CC, the model gradually increases its ability in finding the matching clues between the dialogue context and a response candidate. As for IC, it progressively strengthens the model's ability in identifying the mismatching information between the dialogue context and a response candidate. Empirical studies on three benchmark datasets with three state-of-the-art matching models demonstrate that the proposed learning framework significantly improves the model performance across various evaluation metrics.
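The "easy-to-difficult" training scheme described in the abstract can be sketched as a pacing function over difficulty-ranked examples. The sketch below is illustrative only: the `difficulty` scoring and linear pacing are assumptions for demonstration, not the paper's actual corpus-level (CC) or instance-level (IC) curriculum formulation.

```python
import random

def difficulty(context, response, score_fn):
    # Hypothetical difficulty proxy: a higher matching score for a
    # candidate means it is harder for the model to handle correctly.
    return score_fn(context, response)

def curriculum_batches(pairs, score_fn, epochs, batch_size=2):
    # Order (context, response) pairs from easy to difficult, then
    # expose a growing prefix of the sorted data as training proceeds.
    ranked = sorted(pairs, key=lambda p: difficulty(p[0], p[1], score_fn))
    for epoch in range(1, epochs + 1):
        frac = epoch / epochs  # linear pacing function (an assumption)
        visible = ranked[: max(batch_size, int(len(ranked) * frac))]
        random.shuffle(visible)  # shuffle within the visible prefix
        for i in range(0, len(visible), batch_size):
            yield epoch, visible[i : i + batch_size]
```

Early epochs see only the easiest pairs; by the final epoch the full dataset is in play, mirroring the gradual increase in difficulty that both CC and IC aim for.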

Authors (9)
  1. Yixuan Su (35 papers)
  2. Deng Cai (181 papers)
  3. Qingyu Zhou (28 papers)
  4. Zibo Lin (4 papers)
  5. Simon Baker (63 papers)
  6. Yunbo Cao (43 papers)
  7. Shuming Shi (126 papers)
  8. Nigel Collier (83 papers)
  9. Yan Wang (733 papers)
Citations (45)
