
Active Inference through Incentive Design in Markov Decision Processes (2502.07065v1)

Published 10 Feb 2025 in eess.SY and cs.SY

Abstract: We present a method for active inference with partial observations in stochastic systems through incentive design, also known as a leader-follower game. Consider a leader agent who aims to infer a follower agent's type from a finite set of possible types. Follower types differ in the dynamical model, the reward function, or both. We assume the leader can partially observe a follower's behavior in the stochastic system, modeled as a Markov decision process in which the follower executes an optimal policy that maximizes a total reward. To improve inference accuracy and efficiency, the leader can offer side payments (incentives) to the followers so that, under the incentive design, different follower types exhibit diverging behaviors that facilitate the leader's inference task. We show that active inference through incentive design can be formulated as a special class of leader-follower games, where the leader's objective is to balance the information gain against the cost of the incentive design; the information gain is measured by the entropy of the estimated follower type given partial observations. Furthermore, we demonstrate that this problem can be reduced to a single-level optimization through softmax temporal consistency between followers' policies and value functions. This reduction allows us to develop an efficient gradient-based algorithm. We utilize observable operators in the hidden Markov model (HMM) to compute the necessary gradients and demonstrate the effectiveness of our approach through experiments in stochastic grid world environments.
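The computational idea behind the reduction is that softmax temporal consistency makes each follower type's optimal (entropy-regularized) policy a closed-form, differentiable function of its incentive-modified reward, which is what collapses the leader-follower problem to a single level. Below is a minimal sketch, assuming a tabular MDP, of the two pieces the abstract names: soft value iteration producing a softmax-consistent policy, and the entropy of the leader's posterior over follower types computed by a standard HMM forward recursion (the paper's observable-operator formulation computes the same observation likelihoods in operator form). All dimensions, the temperature tau, the state-based observation model obs_prob, and every function name here are illustrative assumptions, not the authors' implementation.

import numpy as np

def logsumexp(x, axis):
    # Numerically stable log-sum-exp.
    m = x.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.exp(x - m).sum(axis=axis))

def soft_value_iteration(P, R, gamma=0.95, tau=1.0, iters=200):
    # P: (S, A, S) transition kernel; R: (S, A) reward (base reward plus side payment).
    # Softmax temporal consistency:
    #   Q(s,a)  = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    #   V(s)    = tau * log sum_a exp(Q(s,a) / tau)
    #   pi(a|s) = exp((Q(s,a) - V(s)) / tau)
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)                  # matmul contracts P's last axis with V
        V = tau * logsumexp(Q / tau, axis=1)
    Q = R + gamma * (P @ V)                      # one consistent (Q, V) pair at convergence
    pi = np.exp((Q - tau * logsumexp(Q / tau, axis=1)[:, None]) / tau)
    return pi                                    # rows sum to 1 by construction

def type_posterior_entropy(policies, transitions, obs_prob, obs_seq, s0, prior):
    # policies[k]: (S, A) soft policy of type k; transitions[k]: (S, A, S).
    # obs_prob: (S, O) with obs_prob[s, o] = P(o | s), a partial observation of the state.
    # Per type, run the HMM forward recursion; each step (weight by the emission,
    # then push through policy and dynamics) is one observable-operator application.
    likelihoods = []
    for pi, P in zip(policies, transitions):
        alpha = np.zeros(P.shape[0])
        alpha[s0] = 1.0                          # alpha[s] = P(o_{1:t}, s_t = s | type)
        for o in obs_seq:
            alpha = np.einsum('s,sa,sap->p', alpha * obs_prob[:, o], pi, P)
        likelihoods.append(alpha.sum())
    post = prior * np.array(likelihoods)
    post = post / post.sum()                     # Bayes posterior over follower types
    return -(post * np.log(post + 1e-12)).sum(), post

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, O, K = 6, 3, 4, 2                      # states, actions, observations, follower types
    P = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(K)]
    R = [rng.normal(size=(S, A)) for _ in range(K)]
    x = np.zeros((S, A))                         # side payments: the leader's decision variable
    obs_prob = rng.dirichlet(np.ones(O), size=S)
    policies = [soft_value_iteration(P[k], R[k] + x) for k in range(K)]
    H, post = type_posterior_entropy(policies, P, obs_prob, [0, 1, 2], s0=0,
                                     prior=np.full(K, 1.0 / K))
    print("posterior over types:", post, "entropy:", H)

In the paper's formulation the leader would trade this posterior entropy off against the expected cost of the side payments x. Because soft value iteration and the forward recursion are built entirely from differentiable operations, porting the sketch to an autodiff framework would let the gradient with respect to x flow through the followers' best responses, which is the point of the single-level reduction.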
