Optimal Sensor and Actuator Selection for Factored Markov Decision Processes: Complexity, Approximability and Algorithms (2407.07310v2)

Published 10 Jul 2024 in eess.SY, cs.CC, cs.SY, and math.OC

Abstract: Factored Markov Decision Processes (fMDPs) are a class of Markov Decision Processes (MDPs) in which the states (and actions) can be factored into a set of state (and action) variables and can be encoded compactly using a factored representation. In this paper, we consider a setting where the state of the fMDP is not directly observable, and the agent relies on a set of potential sensors to gather information. Each sensor has a selection cost and the designer must select a subset of sensors under a limited budget. We formulate the problem of selecting a set of sensors for fMDPs (under a budget) to maximize the infinite-horizon discounted return provided by the optimal policy. We show the fundamental result that it is NP-hard to approximate this problem to within any non-trivial factor. Our inapproximability results for optimal sensor selection also extend to a general class of Partially Observable MDPs (POMDPs). We then study the dual problem of budgeted actuator selection (at design-time) to maximize the expected return under the optimal policy. Again, we show that it is NP-hard to approximate this problem to within any non-trivial factor. Furthermore, with explicit examples, we show the failure of greedy algorithms for both the sensor and actuator selection problems and provide insights into the factors that cause these problems to be challenging. Despite this, through extensive simulations, we show the practical effectiveness and near-optimal performance of the greedy algorithm for actuator and sensor selection in many real-world and randomly generated instances.
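To make the selection problem concrete, one way to write the budgeted sensor-selection objective described in the abstract is the following; the notation ($\mathcal{Q}$ for the set of candidate sensors, $c$ for the cost function, $B$ for the budget, $\Pi_S$ for the policies that can only use observations from the selected sensors, $\gamma$ for the discount factor, $R$ for the reward) is shorthand introduced here, not the paper's.

$$
\max_{S \subseteq \mathcal{Q}} \; \max_{\pi \in \Pi_{S}} \; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)\right]
\quad \text{subject to} \quad \sum_{q \in S} c(q) \le B .
$$

The abstract also reports that a greedy heuristic, despite having no worst-case guarantee (consistent with the inapproximability results), performs near-optimally on many instances. Below is a minimal sketch of a cost-benefit greedy selection loop of that general kind; the `evaluate_return` oracle, the gain-per-unit-cost rule, and all identifiers are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch of budgeted greedy sensor selection.
# `evaluate_return(selected)` is a placeholder oracle that computes (or
# estimates) the optimal discounted return achievable when only the sensors
# in `selected` are available; it is not part of the paper.

def greedy_sensor_selection(sensors, costs, budget, evaluate_return):
    """Repeatedly add the affordable sensor with the largest marginal gain
    in return per unit cost, until no affordable sensor improves the return.
    Costs are assumed to be strictly positive."""
    selected = set()
    remaining = set(sensors)
    spent = 0.0
    current_value = evaluate_return(selected)

    while remaining:
        best, best_ratio = None, float("-inf")
        for q in remaining:
            if spent + costs[q] > budget:
                continue  # sensor q does not fit in the remaining budget
            gain = evaluate_return(selected | {q}) - current_value
            ratio = gain / costs[q]
            if ratio > best_ratio:
                best, best_ratio = q, ratio
        if best is None or best_ratio <= 0:
            break  # no affordable sensor yields a positive marginal gain
        selected.add(best)
        remaining.remove(best)
        spent += costs[best]
        current_value = evaluate_return(selected)

    return selected, current_value
```

Note that each call to `evaluate_return` hides a full planning problem for the induced partially observable model, so in practice the oracle would be replaced by an approximate or simulation-based evaluation.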
