Computing Policies That Account For The Effects Of Human Agent Uncertainty During Execution In Markov Decision Processes (2109.07436v3)

Published 15 Sep 2021 in cs.AI

Abstract: When humans are given a policy to execute, policy execution errors and deviations can arise if they are uncertain about which state they are in. This can happen due to the human agent's cognitive limitations and/or perceptual errors. An algorithm that computes a policy for a human to execute therefore ought to account for these effects. An optimal Markov Decision Process (MDP) policy that is poorly executed (because of a human agent) may be much worse than another policy that is suboptimal in the MDP but accounts for the human agent's execution behavior. In this paper we consider two problems that arise from state uncertainty: erroneous state inference, and extra sensing actions that a person might take as a result of their uncertainty. We present a framework to model the human agent's behavior with respect to state uncertainty, which can be used to compute MDP policies that account for these problems. We then give a hill-climbing algorithm to search for good policies under our model of the human agent, as well as a branch-and-bound algorithm that finds the optimal policy for such problems. We show experimental results in a Gridworld domain and a warehouse-worker domain. Finally, we present human-subject studies that support our modeling assumptions.
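To make the setup concrete, here is a minimal sketch (not the authors' implementation) of two ingredients the abstract describes: evaluating a policy when the human executor mis-identifies states, and a hill-climbing search over deterministic policies. The toy MDP, the confusion matrix `C`, the 0.9/0.1 noise level, and the helper names `value_under_confusion` and `hill_climb` are all illustrative assumptions; the paper's actual human model also covers extra sensing actions, which this sketch omits.

```python
import numpy as np

# Toy MDP: random transitions P[s, a, s'], rewards R[s, a], discount gamma.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.95
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Assumed human model: confusion matrix C[s_true, s_perceived] -- the
# executor is in s_true but infers some other state with small probability.
C = np.full((n_states, n_states), 0.1 / (n_states - 1))
np.fill_diagonal(C, 0.9)

def value_under_confusion(pi):
    """Average value of deterministic policy pi (action per state) when,
    in true state s, the human actually takes pi(s') with prob. C[s, s']."""
    # Effective (stochastic) action distribution per true state.
    A = np.zeros((n_states, n_actions))
    for s in range(n_states):
        for sp in range(n_states):
            A[s, pi[sp]] += C[s, sp]
    # Transitions and rewards induced by the effective policy.
    P_pi = np.einsum('sa,sat->st', A, P)
    R_pi = np.einsum('sa,sa->s', A, R)
    # Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi.
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    return V.mean()

def hill_climb(iters=200):
    """Local search: try single-state action changes and keep any change
    that improves the confusion-aware value."""
    pi = rng.integers(n_actions, size=n_states)
    best = value_under_confusion(pi)
    for _ in range(iters):
        cand = pi.copy()
        cand[rng.integers(n_states)] = rng.integers(n_actions)
        v = value_under_confusion(cand)
        if v > best:
            pi, best = cand, v
    return pi, best

pi, v = hill_climb()
print("policy:", pi, "confusion-aware value:", round(v, 3))
```

Note that a policy optimal for the noiseless MDP can score worse under `value_under_confusion` than a policy that assigns the same action to easily confused states, which is exactly the gap the paper's search algorithms exploit.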

Authors (3)
  1. Sriram Gopalakrishnan (23 papers)
  2. Mudit Verma (25 papers)
  3. Subbarao Kambhampati (126 papers)
Citations (4)
