Multi-turn Response Selection with Commonsense-enhanced Language Models (2407.18479v1)

Published 26 Jul 2024 in cs.CL

Abstract: As a branch of advanced artificial intelligence, dialogue systems are prospering. Multi-turn response selection is a general research problem in dialogue systems. With the assistance of background information and pre-trained language models, the performance of state-of-the-art methods on this problem has improved impressively. However, existing studies neglect the importance of external commonsense knowledge. Hence, we design a Siamese network that merges a pre-trained language model with a graph neural network (SinLG). SinLG takes advantage of pre-trained language models (PLMs) to capture word correlations in the context and response candidates, and utilizes a graph neural network (GNN) to reason over helpful common sense from an external knowledge graph. The GNN aims to assist the PLM during fine-tuning by arousing its related memories so that it attains better performance. Specifically, we first extract related concepts as nodes from an external knowledge graph to construct a subgraph for each sample, with the context-response pair as a super node. Next, we learn two representations for the context-response pair, one via the PLM and one via the GNN. A similarity loss between the two representations is used to transfer the commonsense knowledge from the GNN to the PLM. Only the PLM is then used for online inference, so efficiency is guaranteed. Finally, we conduct extensive experiments on two variants of the PERSONA-CHAT dataset, which demonstrate that our solution not only improves the performance of the PLM but also achieves efficient inference.
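The training scheme described above can be illustrated with a short sketch. The following PyTorch snippet is not the authors' code: BERT (via HuggingFace transformers) stands in for the PLM, a single linear layer applied over a row-normalized adjacency matrix stands in for the GNN, and a cosine distance between the two pooled representations plays the role of the similarity loss. Names such as SinLGSketch, node_dim, and the 0.1 loss weight are illustrative assumptions. Omitting the graph inputs reproduces the PLM-only online inference path.

```python
# Minimal sketch of a SinLG-style training step (assumptions, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class SinLGSketch(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", node_dim=128, hidden=768):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name)   # encodes the context-response pair
        self.gnn_lin = nn.Linear(node_dim, hidden)       # stand-in for a one-hop graph convolution
        self.classifier = nn.Linear(hidden, 1)           # response-selection score

    def encode_graph(self, node_feats, adj):
        # node_feats: [N, node_dim]; adj: [N, N] with self-loops, row-normalized.
        # Node 0 is assumed to be the super node for the context-response pair.
        h = F.relu(self.gnn_lin(adj @ node_feats))
        return h[0]                                      # super-node representation

    def forward(self, input_ids, attention_mask, node_feats=None, adj=None):
        plm_vec = self.plm(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]  # [CLS] vector
        score = self.classifier(plm_vec).squeeze(-1)
        if node_feats is None:                           # online inference: PLM branch only
            return score, None
        gnn_vec = torch.stack([self.encode_graph(nf, a) for nf, a in zip(node_feats, adj)])
        sim_loss = 1.0 - F.cosine_similarity(plm_vec, gnn_vec, dim=-1).mean()
        return score, sim_loss

# Toy training step: the selection loss is combined with the similarity loss so that
# the commonsense signal captured by the GNN is distilled into the PLM.
model = SinLGSketch()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["context [SEP] candidate response"], return_tensors="pt", padding=True)
labels = torch.tensor([1.0])
node_feats = [torch.randn(5, 128)]                      # hypothetical concept-node features
adj = [torch.eye(5)]                                    # hypothetical (trivial) adjacency
score, sim_loss = model(batch["input_ids"], batch["attention_mask"], node_feats, adj)
loss = F.binary_cross_entropy_with_logits(score, labels) + 0.1 * sim_loss
loss.backward()
```

At test time only `score` is computed, so the extra graph machinery adds no inference cost, which mirrors the efficiency claim in the abstract.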

Authors (6)
  1. Yuandong Wang (20 papers)
  2. Xuhui Ren (4 papers)
  3. Tong Chen (200 papers)
  4. Yuxiao Dong (119 papers)
  5. Nguyen Quoc Viet Hung (18 papers)
  6. Jie Tang (302 papers)