
An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation (1808.08795v1)

Published 27 Aug 2018 in cs.CL

Abstract: Generating semantically coherent responses remains a major challenge in dialogue generation. Unlike conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, and it demands an understanding of utterance-level semantic dependency, i.e., the relation between the overall meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model generates responses with higher coherence and fluency than baseline models. The code is available at https://github.com/lancopku/AMM
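The architecture described above (two auto-encoders plus a mapping module between their latent spaces) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the actual model uses neural sequence encoders/decoders over utterances, while here plain linear maps over fixed-size vectors stand in for them, and all class and variable names (`AutoEncoder`, `AEM`, `respond`, etc.) are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class AutoEncoder:
    """Toy linear auto-encoder: x -> z (latent) -> x_hat."""
    def __init__(self, dim, latent):
        self.W_enc = rng.normal(scale=0.1, size=(latent, dim))
        self.W_dec = rng.normal(scale=0.1, size=(dim, latent))

    def encode(self, x):
        return self.W_enc @ x

    def decode(self, z):
        return self.W_dec @ z

class AEM:
    """Sketch of the Auto-Encoder Matching idea: one auto-encoder for
    inputs, one for responses, and a mapping module that connects
    their utterance-level (latent) representations."""
    def __init__(self, dim, latent):
        self.src_ae = AutoEncoder(dim, latent)   # learns input representations
        self.tgt_ae = AutoEncoder(dim, latent)   # learns response representations
        # Mapping module: latent of input -> latent of response
        self.W_map = rng.normal(scale=0.1, size=(latent, latent))

    def respond(self, x):
        z_src = self.src_ae.encode(x)   # utterance-level representation of input
        z_tgt = self.W_map @ z_src      # map into the response latent space
        return self.tgt_ae.decode(z_tgt)  # decode a response-space vector

model = AEM(dim=16, latent=4)
x = rng.normal(size=16)       # stand-in for an encoded input utterance
y = model.respond(x)
print(y.shape)                # same dimensionality as the input space: (16,)
```

In the paper, the two auto-encoders and the mapping module are trained so that the mapped input representation matches the response representation; this sketch only shows the data flow among the three components.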

Authors (5)
  1. Liangchen Luo (15 papers)
  2. Jingjing Xu (80 papers)
  3. Junyang Lin (99 papers)
  4. Qi Zeng (42 papers)
  5. Xu Sun (194 papers)
Citations (37)