Active Inference through Incentive Design in Markov Decision Processes (2502.07065v1)
Abstract: We present a method for active inference with partial observations in stochastic systems through incentive design, also known as the leader-follower game. Consider a leader who aims to infer a follower's type from a finite set of possible types, where types differ in the dynamical model, the reward function, or both. We assume the leader can partially observe the follower's behavior in a stochastic system modeled as a Markov decision process, in which the follower executes an optimal policy that maximizes its total reward. To improve inference accuracy and efficiency, the leader can offer side payments (incentives) to the followers so that, under the incentive design, followers of different types exhibit diverging behaviors that facilitate the leader's inference task. We show that active inference through incentive design can be formulated as a special class of leader-follower games in which the leader's objective balances the information gain against the cost of the incentives; the information gain is measured by the entropy of the estimated follower type given partial observations. Furthermore, we demonstrate that this bilevel problem can be reduced to a single-level optimization through softmax temporal consistency between the followers' policies and value functions, which allows us to develop an efficient gradient-based algorithm. We utilize observable operators in the hidden Markov model (HMM) to compute the necessary gradients and demonstrate the effectiveness of our approach through experiments in stochastic grid-world environments.
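The softmax temporal consistency the abstract invokes is the standard entropy-regularized (maximum-entropy) Bellman relation. A plausible form, assuming a temperature $\tau > 0$, discount $\gamma$, follower reward $r$, and leader side payment $x$ (notation assumed here, not taken from the paper), is:

$$
V(s) = \tau \log \sum_{a} \exp\!\Big(\tfrac{Q(s,a)}{\tau}\Big),
\qquad
\pi(a \mid s) = \exp\!\Big(\tfrac{Q(s,a) - V(s)}{\tau}\Big),
$$

with $Q(s,a) = r(s,a) + x(s,a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\big[V(s')\big]$. Because the follower's policy is then a smooth (softmax) function of the incentive $x$, the follower's best response can be substituted directly into the leader's objective, collapsing the bilevel leader-follower game into a single-level problem amenable to gradient descent.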
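To suggest how observable operators and the entropy objective fit together, here is a minimal sketch, assuming each follower type induces a Markov chain (its softmax policy composed with the MDP dynamics) that the leader observes through an emission matrix. The function names and the (transition, emission) interface are illustrative, not the paper's API:

```python
import numpy as np

def observable_operators(T, O):
    """Build one operator per observation symbol.

    T : (S, S) transition matrix, T[s, s'] = P(s' | s), induced by
        a given follower type's policy under the current incentive.
    O : (S, M) emission matrix, O[s, o] = P(o | s).
    Returns a list of M matrices with A_o[s, s'] = P(s' | s) P(o | s').
    """
    S, M = O.shape
    return [T * O[:, o][None, :] for o in range(M)]

def log_likelihood(ops, mu, obs):
    """log P(o_1..o_T) via the observable-operator (forward) recursion."""
    alpha = mu.copy()            # alpha[s] = P(o_1..o_t, s_t = s)
    log_p = 0.0
    for o in obs:
        alpha = alpha @ ops[o]   # propagate one step and emit o
        z = alpha.sum()
        log_p += np.log(z)
        alpha /= z               # normalize for numerical stability
    return log_p

def type_posterior_entropy(models, prior, obs):
    """Entropy of P(type | observations): the leader's information measure.

    models : list of (ops, mu) pairs, one per follower type.
    prior  : (K,) prior over types.
    """
    logs = np.array([log_likelihood(ops, mu, obs) for ops, mu in models])
    logs += np.log(prior)
    post = np.exp(logs - logs.max())
    post /= post.sum()
    return -(post * np.log(post + 1e-12)).sum(), post
```

In this factorization, the sequence likelihood is a product of linear operators, so gradients of the entropy objective with respect to the incentive can flow through each type's transition matrix (which depends on the incentive via the softmax policy), for example by automatic differentiation.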