Differentially Private Reward Functions in Policy Synthesis for Markov Decision Processes (2309.12476v3)
Abstract: Markov decision processes often seek to maximize a reward function, but onlookers may infer reward functions by observing the states and actions of such systems, revealing sensitive information. Therefore, in this paper we introduce and compare two methods for privatizing reward functions in policy synthesis for multi-agent Markov decision processes, which generalize Markov decision processes. Reward functions are privatized using differential privacy, a statistical framework for protecting sensitive data. The methods we develop perturb either (1) each agent's individual reward function or (2) the joint reward function shared by all agents. We show that approach (1) provides better performance. We then develop a polynomial-time algorithm for the numerical computation of the performance loss due to privacy on a case-by-case basis. Next, using approach (1), we develop guidelines for selecting reward function values to preserve "goal" and "avoid" states while still remaining private, and we quantify the increase in computational complexity needed to compute policies from privatized rewards. Numerical simulations are performed on three classes of systems, and they reveal a surprising compatibility with privacy: using reasonably strong privacy ($\epsilon = 1.3$) on average induces as little as a $5\%$ decrease in total accumulated reward and a $0.016\%$ increase in computation time.
- Alexander Benvenuti (4 papers)
- Calvin Hawkins (8 papers)
- Brandon Fallin (2 papers)
- Bo Chen (309 papers)
- Brendan Bialy (3 papers)
- Miriam Dennis (3 papers)
- Matthew Hale (56 papers)