Development of a Trust-Aware User Simulator for Statistical Proactive Dialog Modeling in Human-AI Teams (2304.11913v2)

Published 24 Apr 2023 in cs.AI and cs.CL

Abstract: The concept of a Human-AI team has gained increasing attention in recent years. For effective collaboration between humans and AI teammates, proactivity is crucial for close coordination and effective communication. However, the design of adequate proactivity for AI-based systems to support humans is still an open question and a challenging topic. In this paper, we present the development of a corpus-based user simulator for training and testing proactive dialog policies. The simulator incorporates informed knowledge about proactive dialog and its effect on user trust and simulates user behavior and personal information, including socio-demographic features and personality traits. Two different simulation approaches were compared, and a task-step-based approach yielded better overall results due to enhanced modeling of sequential dependencies. This research presents a promising avenue for exploring and evaluating appropriate proactive strategies in a dialog game setting for improving Human-AI teams.
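To make the comparison concrete, below is a minimal, hypothetical sketch of what a task-step-based user simulator could look like: user reactions are sampled from corpus-derived distributions conditioned on the current task step and the system's proactive dialog act, with a simple running trust estimate. All class names, action labels, and the trust-update dynamics are illustrative assumptions for this sketch, not the authors' implementation.

```python
import random

# Hypothetical sketch of a task-step-based user simulator.
# Action distributions are assumed to come from an annotated
# proactive-dialogue corpus, e.g. counts of user reactions per
# (task_step, proactive_act) pair; all names here are illustrative.

PROACTIVE_ACTS = ["none", "notification", "suggestion", "intervention"]
USER_ACTIONS = ["accept", "reject", "ask_clarification"]

class TaskStepUserSimulator:
    def __init__(self, corpus_counts, trust_init=3.0):
        # corpus_counts: {(task_step, proactive_act): {user_action: count}}
        self.corpus_counts = corpus_counts
        self.trust = trust_init  # e.g. a 1-5 Likert-style trust estimate

    def _action_distribution(self, task_step, proactive_act):
        counts = self.corpus_counts.get((task_step, proactive_act), {})
        total = sum(counts.values())
        if total == 0:
            # Back off to a uniform distribution for unseen contexts.
            return {a: 1.0 / len(USER_ACTIONS) for a in USER_ACTIONS}
        return {a: c / total for a, c in counts.items()}

    def step(self, task_step, proactive_act):
        # Sample a user reaction conditioned on the current task step,
        # which is what captures the sequential dependencies the
        # abstract credits for the approach's better results.
        dist = self._action_distribution(task_step, proactive_act)
        actions, probs = zip(*dist.items())
        action = random.choices(actions, weights=probs, k=1)[0]
        # Toy trust update (assumed dynamics): rejected interventions
        # erode trust, accepted proactive acts build it.
        if action == "reject" and proactive_act == "intervention":
            self.trust = max(1.0, self.trust - 0.5)
        elif action == "accept" and proactive_act != "none":
            self.trust = min(5.0, self.trust + 0.25)
        return action, self.trust
```

A dialog policy under training would call `step` once per task step and could use the returned trust value as part of its reward signal; the alternative (non-task-step-based) approach would condition the same sampling on coarser context, losing the per-step sequential structure.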

Authors (3)
  1. Matthias Kraus (9 papers)
  2. Ron Riekenbrauck (2 papers)
  3. Wolfgang Minker (18 papers)
Citations (3)