How Do Human Users Teach a Continual Learning Robot in Repeated Interactions? (2307.00123v1)

Published 30 Jun 2023 in cs.RO, cs.HC, and cs.LG

Abstract: Continual learning (CL) has emerged as an important avenue of research in recent years, at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), to allow robots to continually learn in their environments over long-term interactions with humans. Most research in continual learning, however, has been robot-centered, developing continual learning algorithms that can quickly learn new information on static datasets. In this paper, we take a human-centered approach to continual learning, to understand how humans teach continual learning robots over the long term and whether there are variations in their teaching styles. We conducted an in-person study with 40 participants who interacted with a continual learning robot over 200 sessions. In this between-participant study, we used two different CL models deployed on a Fetch mobile manipulator robot. An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users, indicating the need for personalized adaptation to their distinct teaching styles. The results also show that although there is a difference in the teaching styles between expert and non-expert users, the style does not have an effect on the performance of the continual learning robot. Finally, our analysis shows that the constrained experimental setups that have been widely used to test most continual learning techniques are not adequate, as real users interact with and teach continual learning robots in a variety of ways. Our code is available at https://github.com/aliayub7/cl_hri.
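The abstract does not specify the internals of the two CL models (the repository linked above holds the actual implementation). As a purely illustrative sketch of the kind of interaction the study involves, the snippet below implements a toy prototype-based (nearest-class-mean) incremental learner: each teaching interaction updates a running mean per object label, so earlier labels are retained across sessions rather than overwritten. The class name `PrototypeLearner` and the 2-D feature vectors are hypothetical and not taken from the paper.

```python
import numpy as np

class PrototypeLearner:
    """Toy class-incremental learner: keeps a running mean (prototype)
    per object label and classifies by the nearest prototype."""

    def __init__(self):
        self.prototypes = {}   # label -> mean feature vector
        self.counts = {}       # label -> number of examples seen

    def teach(self, label, feature):
        """Incrementally update the prototype for `label` with one example."""
        feature = np.asarray(feature, dtype=float)
        if label not in self.prototypes:
            self.prototypes[label] = feature.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Running-mean update: prototypes for other labels are untouched,
            # so previously taught objects are not forgotten.
            self.prototypes[label] += (feature - self.prototypes[label]) / self.counts[label]

    def predict(self, feature):
        """Return the label whose prototype is closest to `feature`."""
        feature = np.asarray(feature, dtype=float)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(feature - self.prototypes[lbl]))

# Session 1: the user teaches "mug"; session 2: "book". Both are retained.
robot = PrototypeLearner()
robot.teach("mug", [1.0, 0.0])
robot.teach("book", [0.0, 1.0])
print(robot.predict([0.9, 0.1]))  # -> mug
```

This mirrors the few-shot, session-by-session teaching setting the study examines: the user controls which objects are shown, in what order, and how many examples per object, which is exactly where individual teaching styles come into play.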

Authors (6)
  1. Ali Ayub (22 papers)
  2. Jainish Mehta (1 paper)
  3. Zachary De Francesco (2 papers)
  4. Patrick Holthaus (13 papers)
  5. Kerstin Dautenhahn (25 papers)
  6. Chrystopher L. Nehaniv (25 papers)
Citations (2)