Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans (2307.03362v1)
Abstract: When agents collaborate on a task, it is important that they have some shared mental model of the task routines -- the set of feasible plans for achieving the goals. In reality, however, situations often arise in which such a shared mental model cannot be guaranteed, such as in ad-hoc teams where agents may follow different conventions or when contingent constraints arise that only some agents are aware of. Previous work on human-robot teaming has assumed that the team has a set of shared routines, an assumption that breaks down in these situations. In this work, we leverage epistemic logic to enable agents to understand discrepancies in each other's beliefs about feasible plans and to dynamically plan their actions, adapting or communicating to resolve the discrepancy. We propose a formalism that extends conditional doxastic logic to describe knowledge bases, explicitly representing agents' nested beliefs about the feasible plans and the state of execution. We provide an online execution algorithm based on Monte Carlo Tree Search with which the agent plans its actions, including communication actions that explain the feasibility of plans, announce intent, and ask questions. Finally, we evaluate the success rate and scalability of the algorithm and show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
- A qualitative theory of dynamic interactive belief revision. Logic and the foundations of game and decision theory (LOFT 7), 3: 9–58.
- Epistemic planning for single-and multi-agent systems. Journal of Applied Non-Classical Logics, 21(1): 9–34.
- Better eager than lazy? How agent types impact the successfulness of implicit coordination. In Sixteenth International Conference on Principles of Knowledge Representation and Reasoning.
- Broida, J. 2021. Active Policy Querying for Dynamic Human-Robot Collaboration Tasks. Master’s thesis, Massachusetts Institute of Technology.
- Balancing Explicability and Explanations in Human-Aware Planning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence.
- Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. In International Joint Conference on Artificial Intelligence.
- Cooperative Epistemic Multi-Agent Planning for Implicit Coordination. In M4M@ICLA.
- EFP 2.0: A Multi-Agent Epistemic Solver with Multiple E-State Representations. In International Conference on Automated Planning and Scheduling.
- Integration of Planning with Recognition for Responsive Interaction Using Classical Planners. In AAAI Conference on Artificial Intelligence.
- Generalized nogoods in CSPs. In AAAI, volume 5, 390–396.
- EFP and PG-EFP: Epistemic Forward Search Planners in Multi-Agent Domains. In International Conference on Automated Planning and Scheduling.
- Levine, S. J. 2019. Risk-bounded coordination of human-robot teams through concurrent intent recognition and adaptation. Ph.D. thesis, Massachusetts Institute of Technology.
- Watching and acting together: Concurrent plan recognition and adaptation for human-robot teams. Journal of Artificial Intelligence Research, 63: 281–359.
- Z3: An efficient SMT solver. In International conference on Tools and Algorithms for the Construction and Analysis of Systems, 337–340. Springer.
- Planning Over Multi-Agent Epistemic States: A Classical Planning Approach. In AAAI Conference on Artificial Intelligence.
- NOPA: Neurally-guided Online Probabilistic Assistance for Building Socially Intelligent Home Assistants. In International Conference on Robotics and Automation.
- Epistemic multi-agent planning using monte-carlo tree search. In KI 2019: Advances in Artificial Intelligence: 42nd German Conference on AI.
- Towards the role of theory of mind in explanation. In International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, 75–93. Springer.
- Resolving misconceptions about the plans of agents via Theory of Mind. In Proceedings of the International Conference on Automated Planning and Scheduling, volume 32, 719–729.
- Expectation-aware planning: A unifying framework for synthesizing and executing self-explaining plans for human-aware planning. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Decision-making for bidirectional communication in sequential human-robot collaborative tasks. In 2020 15th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 329–341. IEEE.
- Semi-supervised learning of decision-making models for human-robot collaboration. In Conference on Robot Learning, 192–203. PMLR.
- Semantic results for ontic and epistemic change. Logic and the foundations of game and decision theory (LOFT 7), 3: 87–117.
- Handbook of epistemic logic. College Publications.
- A Logic-Based Explanation Generation Framework for Classical and Hybrid Planning Problems. Journal of Artificial Intelligence Research, 73: 1473–1534.
- A Mental-Model Centric Landscape of Human-AI Symbiosis. CoRR, abs/2202.09447.
- Plan explicability and predictability for robot task planning. In 2017 IEEE international conference on robotics and automation (ICRA), 1313–1320. IEEE.