Topic-based Evaluation for Conversational Bots (1801.03622v1)

Published 11 Jan 2018 in cs.CL, cs.AI, cs.CY, cs.HC, and cs.MA

Abstract: Dialog evaluation is a challenging problem, especially for non-task-oriented dialogs where conversational success is not well-defined. We propose to evaluate dialog quality using topic-based metrics that describe the ability of a conversational bot to sustain coherent and engaging conversations on a topic, and the diversity of topics that a bot can handle. To detect conversation topics per utterance, we adopt Deep Average Networks (DAN) and train a topic classifier on a variety of question and query data categorized into multiple topics. We propose a novel extension to DAN by adding a topic-word attention table that allows the system to jointly capture topic keywords in an utterance and perform topic classification. We compare our proposed topic-based metrics with the ratings provided by users and show that our metrics both correlate with and complement human judgment. Our analysis is performed on tens of thousands of real human-bot dialogs from the Alexa Prize competition and highlights user expectations for conversational bots.
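
To make the abstract's description concrete, below is a minimal sketch of a DAN-style topic classifier extended with a topic-word attention table. This is an illustrative assumption written in PyTorch, not the authors' implementation: the class name, layer sizes, and the exact way the attention table re-weights the word embeddings before averaging are hypothetical.

```python
# Minimal sketch (assumed architecture): a Deep Average Network (DAN)
# topic classifier with a learnable topic-word attention table that
# re-weights word embeddings before pooling. Dimensions and the attention
# formulation are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class TopicAttentionDAN(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_topics):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Topic-word attention table: one learnable score per (topic, word)
        # pair, indicating how strongly a word signals a topic.
        self.topic_word_table = nn.Parameter(
            0.01 * torch.randn(num_topics, vocab_size)
        )
        self.ff = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_topics),
        )

    def forward(self, token_ids, mask):
        # token_ids: (batch, seq_len) word indices
        # mask: (batch, seq_len), 1 for real tokens, 0 for padding
        emb = self.embedding(token_ids)                    # (B, T, E)
        # Look up per-topic keyword scores for each token and pool them
        # across topics into one attention logit per token.
        word_scores = self.topic_word_table[:, token_ids]  # (K, B, T)
        attn_logits = word_scores.mean(dim=0)              # (B, T)
        attn_logits = attn_logits.masked_fill(mask == 0, -1e9)
        attn = torch.softmax(attn_logits, dim=-1)          # (B, T)
        # Attention-weighted average of embeddings instead of a plain mean,
        # so keyword-like tokens dominate the utterance representation.
        pooled = (attn.unsqueeze(-1) * emb).sum(dim=1)     # (B, E)
        return self.ff(pooled)                             # topic logits
```

In this sketch the attention table also serves as an interpretable keyword map: the highest-scoring vocabulary entries in each row indicate which words the model treats as keywords for that topic.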

Authors (6)
  1. Fenfei Guo (5 papers)
  2. Angeliki Metallinou (14 papers)
  3. Chandra Khatri (20 papers)
  4. Anirudh Raju (20 papers)
  5. Anu Venkatesh (10 papers)
  6. Ashwin Ram (9 papers)
Citations (51)