
Deceptive Reinforcement Learning for Privacy-Preserving Planning (2102.03022v1)

Published 5 Feb 2021 in cs.LG, cs.AI, and cs.MA

Abstract: In this paper, we study the problem of deceptive reinforcement learning to preserve the privacy of a reward function. Reinforcement learning is the problem of finding a behaviour policy based on rewards received from exploratory behaviour. A key ingredient in reinforcement learning is a reward function, which determines how much reward (negative or positive) is given and when. However, in some situations, we may want to keep a reward function private; that is, to make it difficult for an observer to determine the reward function used. We define the problem of privacy-preserving reinforcement learning, and present two models for solving it. These models are based on dissimulation -- a form of deception that `hides the truth'. We evaluate our models both computationally and via human behavioural experiments. Results show that the resulting policies are indeed deceptive, and that participants can determine the true reward function less reliably than that of an honest agent.
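The abstract does not detail the paper's two dissimulation models, but the underlying idea of hiding the true reward among candidates can be illustrated. Below is a minimal, hypothetical sketch (not the authors' algorithm): given Q-values learned separately for several candidate reward functions, the agent picks the action whose worst-case normalized value across candidates is highest, so its behaviour stays plausible under every candidate and an observer cannot easily single out the true reward.

```python
import numpy as np

def dissimulative_action(q_tables, state):
    """Pick an action that looks near-optimal under every candidate reward.

    q_tables: array of shape (n_candidates, n_states, n_actions) holding
              Q-values learned separately for each candidate reward function.
    state:    current state index.

    Maximizing the worst-case normalized Q-value across candidates keeps the
    behaviour ambiguous about which candidate reward is actually optimized.
    This is an illustrative assumption, not the models proposed in the paper.
    """
    q = q_tables[:, state, :]                       # (n_candidates, n_actions)
    # Normalize each candidate's Q-values to [0, 1] so they are comparable.
    q_min = q.min(axis=1, keepdims=True)
    q_max = q.max(axis=1, keepdims=True)
    q_norm = (q - q_min) / np.maximum(q_max - q_min, 1e-9)
    # Choose the action with the best worst-case normalized value.
    return int(np.argmax(q_norm.min(axis=0)))

# Toy usage: 3 candidate rewards, 1 state, 4 actions.
rng = np.random.default_rng(0)
q_tables = rng.random((3, 1, 4))
print(dissimulative_action(q_tables, state=0))
```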

Authors (4)
  1. Zhengshang Liu (1 paper)
  2. Yue Yang (146 papers)
  3. Tim Miller (53 papers)
  4. Peta Masters (1 paper)
Citations (17)
