Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation (2102.02454v12)

Published 4 Feb 2021 in cs.LG and cs.AI

Abstract: Extrapolating beyond-demonstrator (BD) performance through imitation learning (IL) aims to learn from and subsequently outperform the demonstrator. To that end, a representative approach is to leverage inverse reinforcement learning (IRL) to infer a reward function from demonstrations before performing RL on the learned reward function. However, most existing reward extrapolation methods require massive numbers of demonstrations, making them difficult to apply to tasks with limited training data. One simple remedy is to perform data augmentation to artificially generate more training data, but this may introduce severe inductive bias and degrade policy performance. In this paper, we propose a novel meta learning-based reward extrapolation (MLRE) algorithm, which can effectively approximate the ground-truth rewards using limited demonstrations. More specifically, MLRE first learns an initial reward function from a set of tasks that have abundant training data; the learned reward function is then fine-tuned using data from the target task. Extensive simulation results demonstrate that MLRE achieves impressive performance improvements compared with other similar beyond-demonstrator IL (BDIL) algorithms.
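
The abstract outlines a two-stage procedure: meta-learn an initial reward function on source tasks that have abundant demonstrations, then fine-tune it on the target task's limited data before running RL on the adapted reward. The sketch below illustrates one plausible realization of that idea, assuming a first-order MAML-style outer loop and a T-REX-style pairwise ranking loss over ranked demonstration trajectories; the network, loss, hyperparameters, and the `sample_support`/`sample_query` task interface are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the two-stage idea described in the abstract:
# (1) meta-learn an initial reward function on source tasks with abundant
#     ranked demonstrations, (2) fine-tune it on the target task's few
#     demonstrations. The first-order MAML outer loop, the T-REX-style
#     pairwise ranking loss, the network, and the task interface
#     (sample_support / sample_query) are illustrative assumptions,
#     not the paper's actual implementation.
import copy
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Maps an observation to a scalar reward."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def traj_return(self, traj):
        # traj: tensor of shape (T, obs_dim); predicted return = summed reward
        return self.net(traj).sum()

def ranking_loss(model, pairs):
    # pairs: list of (worse_traj, better_traj), i.e. ranked demonstration pairs
    loss = 0.0
    for lo, hi in pairs:
        returns = torch.stack([model.traj_return(lo), model.traj_return(hi)])
        # Bradley-Terry style loss: the better trajectory should get higher return
        loss = loss - torch.log_softmax(returns, dim=0)[1]
    return loss / len(pairs)

def meta_train(model, source_tasks, meta_steps=1000, inner_lr=1e-2, outer_lr=1e-3):
    """First-order MAML over source tasks with abundant ranked demonstrations."""
    outer_opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    for _ in range(meta_steps):
        outer_opt.zero_grad()
        for task in source_tasks:
            fast = copy.deepcopy(model)                 # task-specific copy
            fast_params = list(fast.parameters())
            inner = ranking_loss(fast, task.sample_support())
            grads = torch.autograd.grad(inner, fast_params)
            with torch.no_grad():                       # one inner adaptation step
                for p, g in zip(fast_params, grads):
                    p -= inner_lr * g
            # gradient of the post-adaptation loss, accumulated into the meta-model
            ranking_loss(fast, task.sample_query()).backward()
            with torch.no_grad():
                for p, fp in zip(model.parameters(), fast_params):
                    p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
        outer_opt.step()
    return model

def finetune(model, target_pairs, steps=100, lr=1e-3):
    """Adapt the meta-learned reward with the target task's limited demonstrations."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ranking_loss(model, target_pairs).backward()
        opt.step()
    return model  # standard RL can then be run against this reward estimate
```

In this reading, the meta-training stage supplies an initialization that already encodes how to rank trajectories, so only a few gradient steps on the target task's limited demonstrations are needed before the adapted reward can drive policy optimization.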
