Online Convex Optimization in Adversarial Markov Decision Processes (1905.07773v1)

Published 19 May 2019 in cs.LG, cs.AI, and stat.ML

Abstract: We consider online learning in episodic loop-free Markov decision processes (MDPs), where the loss function can change arbitrarily between episodes and the transition function is not known to the learner. We show an $\tilde{O}(L|X|\sqrt{|A|T})$ regret bound, where $T$ is the number of episodes, $X$ is the state space, $A$ is the action space, and $L$ is the length of each episode. Our online algorithm is implemented using the entropic regularization methodology, which allows us to extend the original adversarial MDP model to handle convex performance criteria (different ways to aggregate the losses of a single episode), as well as to improve previous regret bounds.
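As a rough illustration of the entropic regularization the abstract refers to (the notation $q_t$, $\ell_t$, $\eta$, and $\Delta$ below is ours, not taken from the paper), algorithms of this type run online mirror descent over occupancy measures: with $q_t(x,a)$ the occupancy measure played in episode $t$ and $\ell_t$ the loss vector observed afterwards, a typical update is

$$ q_{t+1} = \arg\min_{q \in \Delta} \; \eta \, \langle q, \ell_t \rangle + \mathrm{KL}\!\left(q \,\|\, q_t\right), $$

where $\Delta$ is the set of occupancy measures consistent with (a confidence set for) the unknown transition function, $\eta$ is a learning rate, and the next policy is recovered as $\pi_{t+1}(a \mid x) \propto q_{t+1}(x,a)$. This is only a schematic sketch of the general approach, not the paper's exact algorithm.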

Authors (2)
  1. Aviv Rosenberg (19 papers)
  2. Yishay Mansour (158 papers)
Citations (133)
