
Understanding Robot Minds: Leveraging Machine Teaching for Transparent Human-Robot Collaboration Across Diverse Groups (2404.15472v1)

Published 23 Apr 2024 in cs.RO

Abstract: In this work, we aim to improve transparency and efficacy in human-robot collaboration by developing machine teaching algorithms suitable for groups with varied learning capabilities. While prior work focused on tailoring instruction to individuals, our method teaches teams with various compositions of diverse learners, using team belief representations to address personalization challenges within groups. We investigate various group teaching strategies, such as focusing on individual beliefs or the group's collective beliefs, and assess their impact on learning robot policies for different team compositions. Our findings reveal that team belief strategies yield less variation in learning duration and better accommodate diverse teams compared to individual belief strategies, suggesting their suitability in mixed-proficiency settings with limited resources. Conversely, individual belief strategies provide a more uniform knowledge level, which is particularly effective for homogeneously inexperienced groups. Our study indicates that a teaching strategy's efficacy is significantly influenced by team composition and learner proficiency, highlighting the importance of assessing learner proficiency in real time and adapting the teaching approach accordingly for optimal outcomes.
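The contrast between the two strategies can be sketched in miniature. The snippet below is an illustrative toy, not the paper's actual algorithm: it assumes each learner's belief over a scalar reward weight is a set of particle samples, and that the teacher scores candidate demonstrations by how close they sit to a belief's mass. A team-belief teacher pools all particles and picks one shared demonstration; an individual-belief teacher tailors one per learner. All names (`demo_score`, `team_demo`, etc.) are invented for this sketch.

```python
import random
import statistics

random.seed(0)

# Illustrative setup: three learners, each with a particle-based belief
# over a scalar reward weight in [0, 1].
learners = [[random.uniform(0, 1) for _ in range(50)] for _ in range(3)]

def demo_score(belief, demo_w):
    """Toy informativeness: demonstrations near the belief's mass score highest."""
    return -statistics.fmean((p - demo_w) ** 2 for p in belief)

candidates = [i / 10 for i in range(11)]

# Team-belief strategy: pool all learners' particles, pick one shared demo.
team = [p for belief in learners for p in belief]
team_demo = max(candidates, key=lambda w: demo_score(team, w))

# Individual-belief strategy: pick a tailored demo per learner.
indiv_demos = [max(candidates, key=lambda w: demo_score(b, w)) for b in learners]

print(team_demo, indiv_demos)
```

The pooled strategy issues a single demonstration for the whole team (cheap, low variance across learners), while the individual strategy issues one per learner (more uniform knowledge, but more teaching effort), mirroring the trade-off the abstract reports.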

