Advancing the State of the Art in Open Domain Dialog Systems through the Alexa Prize (1812.10757v1)

Published 27 Dec 2018 in cs.CL and cs.AI

Abstract: Building open domain conversational systems that allow users to have engaging conversations on topics of their choice is a challenging task. Alexa Prize was launched in 2016 to tackle the problem of achieving natural, sustained, coherent and engaging open-domain dialogs. In the second iteration of the competition in 2018, university teams advanced the state of the art by using context in dialog models, leveraging knowledge graphs for language understanding, handling complex utterances, building statistical and hierarchical dialog managers, and leveraging model-driven signals from user responses. The 2018 competition also included the provision of a suite of tools and models to the competitors including the CoBot (conversational bot) toolkit, topic and dialog act detection models, conversation evaluators, and a sensitive content detection model so that the competing teams could focus on building knowledge-rich, coherent and engaging multi-turn dialog systems. This paper outlines the advances developed by the university teams as well as the Alexa Prize team to achieve the common goal of advancing the science of Conversational AI. We address several key open-ended problems such as conversational speech recognition, open domain natural language understanding, commonsense reasoning, statistical dialog management, and dialog evaluation. These collaborative efforts have driven improved experiences by Alexa users to an average rating of 3.61, the median duration of 2 mins 18 seconds, and average turns to 14.6, increases of 14%, 92%, 54% respectively since the launch of the 2018 competition. For conversational speech recognition, we have improved our relative Word Error Rate by 55% and our relative Entity Error Rate by 34% since the launch of the Alexa Prize. Socialbots improved in quality significantly more rapidly in 2018, in part due to the release of the CoBot toolkit.

Authors (22)
  1. Chandra Khatri (20 papers)
  2. Behnam Hedayatnia (27 papers)
  3. Anu Venkatesh (10 papers)
  4. Jeff Nunn (2 papers)
  5. Yi Pan (79 papers)
  6. Qing Liu (196 papers)
  7. Han Song (7 papers)
  8. Anna Gottardi (5 papers)
  9. Sanjeev Kwatra (3 papers)
  10. Sanju Pancholi (1 paper)
  11. Ming Cheng (69 papers)
  12. Qinglang Chen (1 paper)
  13. Lauren Stubel (1 paper)
  14. Karthik Gopalakrishnan (34 papers)
  15. Kate Bland (5 papers)
  16. Raefer Gabriel (10 papers)
  17. Arindam Mandal (26 papers)
  18. Gene Hwang (2 papers)
  19. Nate Michel (1 paper)
  20. Eric King (2 papers)
Citations (83)