
Contracting With a Reinforcement Learning Agent by Playing Trick or Treat (2410.13520v1)

Published 17 Oct 2024 in cs.GT

Abstract: We study principal-agent problems where a farsighted agent takes costly actions in an MDP. The core challenge in these settings is that the agent's actions are hidden from the principal, who can only observe their outcomes, namely state transitions and their associated rewards. Thus, the principal's goal is to devise a policy that incentivizes the agent to take actions leading to desirable outcomes. This is accomplished by committing to a payment scheme (a.k.a. contract) at each step, specifying a monetary transfer from the principal to the agent for every possible outcome. Interestingly, we show that Markovian policies are unfit in these settings, as they do not allow the principal to achieve optimal utility and are computationally intractable. Thus, accounting for history is unavoidable, and this begets considerable additional challenges compared to standard MDPs. Nevertheless, we design an efficient algorithm to compute an optimal policy, leveraging a compact representation of histories for this purpose. Unfortunately, the policy produced by such an algorithm cannot be readily implemented, as it is only approximately incentive compatible, meaning that the agent is incentivized to take the desired actions only approximately. To fix this, we design an efficient method to make such a policy incentive compatible while introducing only a negligible loss in the principal's utility. This method can be generally applied to any approximately-incentive-compatible policy, and it generalizes a related approach, already known for classical principal-agent problems, to the more general setting of MDPs.
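To make the contract-design setup concrete, here is a minimal single-step sketch (a toy instance invented for illustration; it is not the paper's algorithm and omits the MDP dynamics entirely): the agent privately chooses a costly action, the principal observes only the stochastic outcome, and a contract `t` specifies a payment for each outcome. The agent best-responds to the contract, and the principal searches over contracts for one whose induced best response yields high principal utility. All numbers (distributions, costs, rewards, the payment grid) are made up for the example.

```python
# Toy single-step hidden-action principal-agent problem (illustrative only).
# Outcomes: 0 = failure, 1 = success. The principal never sees the action.
import itertools

dist = {"effort": [0.1, 0.9], "shirk": [0.8, 0.2]}  # outcome distribution per hidden action
cost = {"effort": 0.3, "shirk": 0.0}                # agent's cost of each action
principal_reward = [0.0, 1.0]                       # principal's reward per outcome

def agent_best_response(t):
    # The agent maximizes expected payment minus action cost.
    return max(dist, key=lambda a: sum(p * ti for p, ti in zip(dist[a], t)) - cost[a])

def principal_utility(t):
    # Principal's expected reward minus expected payment, under the agent's best response.
    a = agent_best_response(t)
    return sum(p * (r - ti) for p, r, ti in zip(dist[a], principal_reward, t))

# Brute-force search over a grid of contracts (payments in [0, 1], step 0.05).
grid = [i / 20 for i in range(21)]
best = max(itertools.product(grid, repeat=2), key=principal_utility)
print(best, agent_best_response(best))
```

In the paper's setting this interaction repeats at every step of an MDP and the committed contract may depend on history; the one-step version above is just the classical hidden-action building block. Incentive compatibility here is the constraint that the agent's best response is the action the principal is targeting, which the brute-force search enforces implicitly by evaluating each contract under the induced best response.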

