
Enhancing Task-Oriented Dialogues with Chitchat: a Comparative Study Based on Lexical Diversity and Divergence (2311.14067v2)

Published 23 Nov 2023 in cs.CL

Abstract: As a recent development, task-oriented dialogues (TODs) have been enriched with chitchat in an effort to make dialogues more diverse and engaging. This enhancement is particularly valuable as TODs are often confined to narrow domains, making the mitigation of repetitive and predictable responses a significant challenge. This paper presents a comparative analysis of three chitchat enhancements, aiming to identify the most effective approach in terms of diversity. Additionally, we quantify the divergence between the added chitchat, the original task-oriented language, and chitchat typically found in chitchat datasets, highlighting the top 20 divergent keywords for each comparison. Our findings drive a discussion on future enhancements for augmenting TODs, emphasizing the importance of grounding dialogues beyond the task to achieve more diverse and natural exchanges.
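The abstract describes quantifying divergence between the added chitchat, the original task-oriented language, and typical chitchat, and surfacing the top 20 divergent keywords per comparison. The paper's exact formulation is not reproduced here, but a common way to do this (following the divergence measures the field typically uses for corpus comparison) is Jensen–Shannon divergence over unigram distributions, ranking words by their per-term contribution. The sketch below is illustrative only — the toy corpora, function names, and the entropy-based diversity helper are this example's assumptions, not the paper's implementation:

```python
from collections import Counter
import math

def unigram_dist(tokens, vocab):
    """Relative frequency of each vocabulary word in a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: counts[w] / total for w in vocab}

def shannon_entropy(tokens):
    """Shannon entropy (bits) of the unigram distribution —
    one entropy-based proxy for lexical diversity."""
    dist = unigram_dist(tokens, set(tokens))
    return -sum(p * math.log2(p) for p in dist.values())

def jsd_with_contributions(tokens_a, tokens_b):
    """Jensen-Shannon divergence (log base 2, so bounded by [0, 1])
    between two corpora's unigram distributions, plus each word's
    additive contribution to the total, for keyword ranking."""
    vocab = set(tokens_a) | set(tokens_b)
    p = unigram_dist(tokens_a, vocab)
    q = unigram_dist(tokens_b, vocab)
    contrib = {}
    for w in vocab:
        m = 0.5 * (p[w] + q[w])
        c = 0.0
        if p[w] > 0:
            c += 0.5 * p[w] * math.log2(p[w] / m)
        if q[w] > 0:
            c += 0.5 * q[w] * math.log2(q[w] / m)
        contrib[w] = c
    return sum(contrib.values()), contrib

# Toy stand-ins for a TOD corpus and a chitchat corpus.
tod = "book a table for two at seven please".split()
chitchat = "i love jazz do you enjoy live music too".split()

jsd, contrib = jsd_with_contributions(tod, chitchat)
# Rank words by how much they drive the corpora apart (top 20 in the paper).
top = sorted(contrib, key=contrib.get, reverse=True)[:20]

print(f"entropy(tod) = {shannon_entropy(tod):.2f} bits")
print(f"JSD = {jsd:.3f}")  # disjoint vocabularies give the maximum, 1.0
print("most divergent keywords:", top[:5])
```

Because the two toy corpora share no words, the divergence hits its upper bound of 1.0; on real TOD-versus-chitchat data the value is lower, and the per-word contributions identify which keywords (e.g. task slots versus conversational fillers) account for the gap.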
