
Learning Transferable Concepts in Deep Reinforcement Learning (2005.07870v4)

Published 16 May 2020 in cs.AI

Abstract: While humans and animals learn incrementally during their lifetimes and exploit their experience to solve new tasks, standard deep reinforcement learning methods specialize to solve only one task at a time. As a result, the information they acquire is hardly reusable in new situations. Here, we introduce a new perspective on the problem of leveraging prior knowledge to solve future tasks. We show that learning discrete representations of sensory inputs can provide a high-level abstraction that is common across multiple tasks, thus facilitating the transfer of information. In particular, we show that it is possible to learn such representations by self-supervision, following an information-theoretic approach. Our method is able to learn concepts in locomotion and optimal-control tasks that increase sample efficiency in both known and unknown tasks, opening a new path toward endowing artificial agents with generalization abilities.
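The core idea in the abstract — mapping continuous sensory inputs to a small set of discrete "concepts" that can be reused across tasks — can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the paper's actual information-theoretic method: it uses a simple vector-quantization (Lloyd-style) loop to assign observations to discrete concept indices, which is one common way to obtain such discrete abstractions. All names (`assign_concepts`, `update_codebook`) are hypothetical.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's method): a tiny vector-quantization
# loop that maps continuous observations to discrete "concept" indices.
# Discrete codes like these could serve as a task-agnostic abstraction layer.

rng = np.random.default_rng(0)

def assign_concepts(observations, codebook):
    """Assign each observation to its nearest codebook entry (a concept id)."""
    # Pairwise distances, shape (n_obs, n_codes), via broadcasting.
    d = np.linalg.norm(observations[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)

def update_codebook(observations, codes, codebook):
    """Move each codebook entry toward the mean of its assigned observations."""
    new = codebook.copy()
    for k in range(len(codebook)):
        members = observations[codes == k]
        if len(members):
            new[k] = members.mean(axis=0)
    return new

# Toy data: observations drawn around 3 latent "concepts".
centers = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
obs = np.concatenate([c + rng.normal(scale=0.3, size=(50, 2)) for c in centers])

codebook = rng.normal(size=(3, 2))   # random initial codebook
for _ in range(20):                  # simple Lloyd-style refinement
    codes = assign_concepts(obs, codebook)
    codebook = update_codebook(obs, codes, codebook)

print(np.bincount(codes, minlength=3))  # how many observations map to each concept
```

In a transfer setting, the learned discrete codes (rather than raw observations) would be the interface shared across tasks, so a policy trained on one task can condition on the same concept indices in another.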
