
Probing Task-Oriented Dialogue Representation from Language Models (2010.13912v1)

Published 26 Oct 2020 in cs.CL and cs.AI

Abstract: This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representations for task-oriented dialogue tasks. We approach the problem from two aspects: a supervised classifier probe and an unsupervised mutual information probe. For the supervised probe, we fine-tune a feed-forward layer on top of a fixed pre-trained language model using annotated labels. For the unsupervised probe, we propose a mutual information measure that evaluates the dependence between the true label clustering and a clustering of the learned representations. The goals of this empirical paper are to 1) investigate probing techniques, especially the unsupervised mutual information probe, 2) provide guidelines for pre-trained language model selection to the dialogue research community, and 3) identify pre-training factors that may be key to successful dialogue applications.
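The two probes described in the abstract lend themselves to a compact illustration. The sketch below is not the paper's implementation; it assumes BERT as the frozen encoder, mean pooling over token states, a single linear layer as the supervised classifier probe, and KMeans with scikit-learn's normalized mutual information standing in for the unsupervised mutual information probe. The model name, pooling choice, clustering algorithm, and label count are all hypothetical.

```python
# Minimal sketch of the two probing setups (assumptions noted above, not from the paper).
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the pre-trained model stays fixed; only the probe head is trained


def embed(utterances):
    """Mean-pooled, frozen representations for a batch of dialogue utterances."""
    batch = tokenizer(utterances, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)


# --- Supervised classifier probe: a feed-forward layer on frozen features ---
num_labels = 4  # hypothetical number of intent / dialogue-act labels
probe = nn.Linear(encoder.config.hidden_size, num_labels)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)


def probe_step(utterances, labels):
    """One training step; gradients never reach the frozen encoder."""
    feats = embed(utterances)
    loss = nn.functional.cross_entropy(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# --- Unsupervised mutual information probe: cluster the representations ---
# --- and measure their dependence on the true label clustering          ---
def mi_probe(utterances, labels, n_clusters):
    feats = embed(utterances).numpy()
    pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return normalized_mutual_info_score(labels, pred)
```

A higher classifier-probe accuracy or a higher mutual information score would indicate that the frozen representations already encode the dialogue labels, which is the sense in which the paper compares pre-trained models.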

Authors (2)
  1. Chien-Sheng Wu (77 papers)
  2. Caiming Xiong (337 papers)
Citations (19)