
RadGrad: Active learning with loss gradients (1906.07838v1)

Published 18 Jun 2019 in cs.LG, cs.AI, cs.RO, and stat.ML

Abstract: Solving sequential decision prediction problems, including those in imitation learning settings, requires mitigating the problem of covariate shift. The standard approach, DAgger, relies on capturing expert behaviour in all states that the agent reaches. In real-world settings, querying an expert is costly. We propose a new active learning algorithm that selectively queries the expert, based on both a prediction of agent error and a proxy for agent risk, that maintains the performance of unrestrained expert querying systems while substantially reducing the number of expert queries made. We show that our approach, RadGrad, has the potential to improve upon existing safety-aware algorithms, and matches or exceeds the performance of DAgger and variants (i.e., SafeDAgger) in one simulated environment. However, we also find that a more complex environment poses challenges not only to our proposed method, but also to existing safety-aware algorithms, which do not match the performance of DAgger in our experiments.
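
The abstract describes the core mechanism: a DAgger-style data-collection loop in which expert queries are gated by two signals, a prediction of agent error and a proxy for agent risk. The sketch below illustrates only that gating structure, not the paper's actual formulation (which, per the title, is based on loss gradients). The names `predict_error`, `risk_proxy`, the thresholds, and the `act`/Gym-style interfaces are all hypothetical placeholders.

```python
def gated_dagger_rollout(env, agent, expert, predict_error, risk_proxy,
                         error_threshold=0.5, risk_threshold=0.5):
    """Collect one episode, querying the expert only at selected states.

    Illustrative sketch of the gating described in the abstract: unlike
    DAgger, which labels every visited state, the expert is queried only
    when a predicted agent error or a risk proxy crosses a threshold.
    Assumes a Gym-style `env` and policies exposing an `act` method.
    """
    dataset = []              # (state, expert_action) pairs for retraining
    queries = 0               # number of expert queries actually made
    state, done = env.reset(), False
    while not done:
        action = agent.act(state)
        err = predict_error(state, action)   # predicted agent error (placeholder)
        risk = risk_proxy(state)             # proxy for agent risk (placeholder)
        if err > error_threshold or risk > risk_threshold:
            dataset.append((state, expert.act(state)))   # query the expert
            queries += 1
        state, _, done, _ = env.step(action)  # the agent keeps control
    return dataset, queries
```

In a full training loop, the collected pairs would be aggregated across iterations and the policy retrained on the union, as in DAgger; the saving comes from `queries` typically being far smaller than the episode length.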

Citations (3)
