Relevance-Promoting Language Model for Short-Text Conversation (1911.11489v1)

Published 26 Nov 2019 in cs.CL

Abstract: Despite the effectiveness of the sequence-to-sequence framework on the task of Short-Text Conversation (STC), the issue of under-exploitation of training data (i.e., the supervision signals from the query text are ignored) remains unresolved. Moreover, the commonly adopted maximization-based decoding strategies, which tend to produce generic or repetitive responses, are ill-suited to the STC task. In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation. To enhance generation performance, we design a relevance-promoting transformer language model, which performs additional supervised source attention after the self-attention to increase the importance of informative query tokens in calculating the token-level representation. The model further refines the query representation with relevance clues inferred from its multiple references during training. In testing, we adopt a randomization-over-maximization strategy to reduce the generation of generic responses. Experimental results on a large Chinese STC dataset demonstrate the superiority of the proposed model on relevance and diversity metrics. Code available at https://ai.tencent.com/ailab/nlp/dialogue/.
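The abstract describes two technical ingredients: a transformer block that adds supervised source attention over the query tokens after the usual self-attention, and a randomization-over-maximization decoding step. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of those two ideas, with layer sizes, module names, and the top-k sampling choice all being illustrative assumptions.

```python
# Hedged sketch of a relevance-promoting decoder block: masked self-attention is
# followed by an extra attention step over the encoded query (source) tokens, whose
# attention weights could be supervised against relevance clues during training.
# Dimensions, naming, and the use of nn.MultiheadAttention are assumptions.
import torch
import torch.nn as nn

class RelevancePromotingBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.source_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, query_states, causal_mask=None):
        # Standard masked self-attention over the running token sequence.
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + h)
        # Additional source attention: attend to the query tokens so that
        # informative query positions contribute more to each representation.
        h, src_weights = self.source_attn(x, query_states, query_states)
        x = self.norm2(x + h)
        x = self.norm3(x + self.ff(x))
        return x, src_weights  # src_weights are where a relevance supervision loss could attach

# Hedged sketch of a "randomization-over-maximization" style decoding step:
# sample the next token from the top-k candidates instead of taking the argmax.
# The value of k and the exact procedure are assumptions, not the paper's specification.
def sample_next_token(logits: torch.Tensor, k: int = 20) -> torch.Tensor:
    topk_logits, topk_ids = torch.topk(logits, k, dim=-1)
    probs = torch.softmax(topk_logits, dim=-1)
    idx = torch.multinomial(probs, num_samples=1)
    return topk_ids.gather(-1, idx)
```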

Authors (5)
  1. Xin Li (980 papers)
  2. Piji Li (75 papers)
  3. Wei Bi (62 papers)
  4. Xiaojiang Liu (27 papers)
  5. Wai Lam (117 papers)
Citations (11)