
LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation (2202.13536v2)

Published 28 Feb 2022 in cs.LG and cs.AI

Abstract: We consider the problem of learning from observation (LfO), in which the agent aims to mimic the expert's behavior from state-only expert demonstrations. We additionally assume that the agent cannot interact with the environment but has access to action-labeled transition data collected by agents of unknown quality. This offline setting for LfO is appealing in many real-world scenarios where ground-truth expert actions are inaccessible and arbitrary environment interaction is costly or risky. In this paper, we present LobsDICE, an offline LfO algorithm that learns to imitate the expert policy via optimization in the space of stationary distributions. Our algorithm solves a single convex minimization problem, which minimizes the divergence between the two state-transition distributions induced by the expert and the agent policy. Through an extensive set of offline LfO tasks, we show that LobsDICE outperforms strong baseline methods.
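To make the objective concrete, the sketch below builds the stationary state-transition distribution d(s, s') = d(s) P(s'|s) for two policies in a toy Markov chain and measures the KL divergence between them. This is only an illustration of the quantity LobsDICE minimizes, not the paper's actual algorithm (which estimates the correction from offline data rather than from a known transition matrix); the 3-state chain and both policies are hypothetical.

```python
import numpy as np

def stationary_dist(P, tol=1e-12, iters=10_000):
    """Power-iterate d <- d @ P to the stationary state distribution."""
    n = P.shape[0]
    d = np.full(n, 1.0 / n)
    for _ in range(iters):
        d_next = d @ P
        if np.abs(d_next - d).sum() < tol:
            break
        d = d_next
    return d

def transition_occupancy(P):
    """State-transition distribution d(s, s') = d(s) * P(s'|s)."""
    d = stationary_dist(P)
    return d[:, None] * P

def kl(p, q, eps=1e-12):
    """KL divergence between two distributions over (s, s') pairs."""
    p, q = p.ravel(), q.ravel()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical 3-state chain: row-stochastic transition matrices
# induced by an "expert" policy and an "agent" policy.
P_expert = np.array([[0.8, 0.2, 0.0],
                     [0.1, 0.8, 0.1],
                     [0.0, 0.2, 0.8]])
P_agent  = np.array([[0.5, 0.5, 0.0],
                     [0.3, 0.4, 0.3],
                     [0.0, 0.5, 0.5]])

d_expert = transition_occupancy(P_expert)
d_agent  = transition_occupancy(P_agent)
divergence = kl(d_expert, d_agent)
```

Note that the divergence is taken over state-transition pairs (s, s'), not state-action pairs, which is what lets the method work with state-only expert demonstrations.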

Authors (5)
  1. Geon-Hyeong Kim (3 papers)
  2. Jongmin Lee (50 papers)
  3. Youngsoo Jang (4 papers)
  4. Hongseok Yang (44 papers)
  5. Kee-Eung Kim (24 papers)
Citations (14)