Understanding the Application of Utility Theory in Robotics and Artificial Intelligence: A Survey (2306.09445v1)
Abstract: As a unifying concept in economics, game theory, operations research, and increasingly in Robotics and AI, utility is used to evaluate the level of an individual's needs, preferences, and interests. In decision-making and learning for multi-agent and multi-robot systems (MAS/MRS) in particular, a suitable utility model can guide agents in choosing reasonable strategies to satisfy their current needs and in learning to cooperate and organize their behaviors, thereby optimizing the system's utility, building stable and reliable relationships, and supporting each group member's sustainable development, much as in human society. Although the complex, large-scale, long-term behaviors of these systems are strongly shaped by the fundamental characteristics of the underlying relationships, there has been comparatively little discussion of the theoretical mechanisms involved and of their fields of application in Robotics and AI. This paper introduces a utility-oriented needs paradigm to describe and evaluate the internal and external relationships arising from agents' interactions. We then survey the existing literature in relevant fields to support this paradigm and propose several promising research directions, along with open problems deemed necessary for further investigation.
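To make the abstract's notion of a utility model concrete, the minimal sketch below (a hypothetical illustration, not taken from the paper; the need names, weights, and satisfaction values are all assumed) shows expected-utility-based strategy selection for a single agent whose utility is a weighted sum over how well each candidate strategy meets its current needs.

```python
# Hypothetical illustration (not from the paper): an agent with weighted needs
# picks the strategy that maximizes its expected utility, the kind of
# utility-driven strategy selection the abstract describes for MAS/MRS.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    # Expected degree (0..1) to which this strategy satisfies each need.
    satisfaction: dict

def expected_utility(strategy: Strategy, need_weights: dict) -> float:
    """Utility as a weighted sum of expected need satisfaction."""
    return sum(need_weights[n] * strategy.satisfaction.get(n, 0.0)
               for n in need_weights)

def choose_strategy(strategies, need_weights):
    """Select the strategy with the highest expected utility."""
    return max(strategies, key=lambda s: expected_utility(s, need_weights))

if __name__ == "__main__":
    # Illustrative needs and weights for a single robot (values assumed).
    needs = {"safety": 0.5, "energy": 0.3, "task_progress": 0.2}
    candidates = [
        Strategy("explore",   {"safety": 0.4, "energy": 0.3, "task_progress": 0.8}),
        Strategy("recharge",  {"safety": 0.9, "energy": 1.0, "task_progress": 0.1}),
        Strategy("cooperate", {"safety": 0.7, "energy": 0.5, "task_progress": 0.7}),
    ]
    best = choose_strategy(candidates, needs)
    print(best.name, round(expected_utility(best, needs), 3))
```

In a multi-robot setting, the weights would typically shift as needs are satisfied or frustrated, which is what lets a needs-based utility model coordinate cooperation over time rather than fix a single static objective.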