Personalizing Interfaces to Humans with User-Friendly Priors (2403.07192v2)
Abstract: Robots often need to convey information to human users. For example, robots can leverage visual, auditory, and haptic interfaces to display their intent or express their internal state. In some scenarios there are socially agreed-upon conventions for what these signals mean: e.g., a red light indicates that an autonomous car is slowing down. But as robots develop new capabilities and seek to convey more complex data, the meaning behind their signals is not always mutually understood: one user might think a flashing light indicates the autonomous car is an aggressive driver, while another user might think the same signal means the autonomous car is defensive. In this paper we enable robots to adapt their interfaces to the current user so that the human's personalized interpretation is aligned with the robot's meaning. We start with an information-theoretic end-to-end approach, which automatically tunes the interface policy to optimize the correlation between human and robot. But to ensure that the learned policy is intuitive -- and to accelerate how quickly the interface adapts to the human -- we recognize that humans have priors over how interfaces should function. For instance, humans expect interface signals to be proportional and convex. Our approach biases the robot's interface towards these priors, resulting in signals that are adapted to the current user while still following social expectations. Our simulations and user study results across $15$ participants suggest that these priors improve robot-to-human communication. See videos here: https://youtu.be/Re3OLg57hp8
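The core idea of the abstract -- tune an interface to maximize how well the human's interpretation tracks the robot's state, while biasing the interface toward human-friendly priors -- can be sketched in a toy one-dimensional example. Everything below is an illustrative assumption, not the paper's actual model: the interface is a hypothetical power curve, the user is simulated by a square-root decoder with noise, and the prior bias is a simple penalty pulling the mapping toward proportionality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the robot's scalar internal state x in [0, 1]
# (e.g., driving aggressiveness) is mapped to a signal s = f_theta(x).
x = rng.uniform(0.0, 1.0, 500)
noise = rng.normal(0.0, 0.05, 500)  # fixed so the objective is deterministic

def interface(x, theta):
    """Power-curve signal: theta = 1 is exactly proportional, theta > 1 convex."""
    return x ** theta

def human_decode(s):
    """Illustrative user model: a square-root personal interpretation plus noise."""
    return np.clip(np.sqrt(s) + noise, 0.0, 1.0)

def alignment(theta, lam=0.1):
    """Correlation between robot state and human interpretation, biased
    toward the proportional prior (theta = 1) as a stand-in for the
    paper's user-friendly priors."""
    z = human_decode(interface(x, theta))
    return np.corrcoef(x, z)[0, 1] - lam * abs(theta - 1.0)

# Grid search over the interface parameter (a Bayesian optimizer could
# replace this in a higher-dimensional setting).
thetas = np.linspace(0.5, 4.0, 36)
best = max(thetas, key=alignment)
print(f"best exponent: {best:.2f}")
```

Without the penalty, the robot would pick the exponent that exactly inverts the simulated user's nonlinearity; the prior term trades a little correlation for a signal that stays closer to the proportional mapping users expect.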