Foundation Models for Semantic Novelty in Reinforcement Learning (2211.04878v1)
Published 9 Nov 2022 in cs.LG and cs.AI
Abstract: Effectively exploring the environment is a key challenge in reinforcement learning (RL). We address this challenge by defining a novel intrinsic reward based on a foundation model, such as contrastive language-image pretraining (CLIP), which can encode a wealth of domain-independent semantic visual-language knowledge about the world. Specifically, our intrinsic reward is defined based on pre-trained CLIP embeddings without any fine-tuning or learning on the target RL task. We demonstrate that CLIP-based intrinsic rewards can drive exploration towards semantically meaningful states and outperform state-of-the-art methods in challenging sparse-reward procedurally-generated environments.
- Tarun Gupta (16 papers)
- Peter Karkus (29 papers)
- Tong Che (26 papers)
- Danfei Xu (59 papers)
- Marco Pavone (314 papers)
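
The abstract's core idea, an intrinsic reward computed from frozen pre-trained embeddings with no learning on the target task, can be sketched as an episodic novelty bonus in embedding space. The sketch below is an illustrative assumption, not the paper's exact formulation: `embed_fn` stands in for a frozen encoder such as CLIP's image tower, and the reward is the cosine distance from the current state's embedding to the nearest embedding seen so far in the episode.

```python
import numpy as np

class SemanticNoveltyReward:
    """Episodic intrinsic reward in a frozen embedding space (sketch).

    `embed_fn` is a hypothetical stand-in for a pre-trained, frozen
    encoder (e.g. CLIP's image tower). No parameters are fine-tuned
    on the RL task; novelty is measured purely in the embedding space.
    """

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn  # frozen encoder: observation -> 1-D vector
        self.memory = []          # episodic memory of unit-norm embeddings

    def reset(self):
        """Clear the episodic memory at the start of each episode."""
        self.memory.clear()

    def __call__(self, obs):
        z = np.asarray(self.embed_fn(obs), dtype=np.float64)
        z = z / (np.linalg.norm(z) + 1e-8)  # unit-normalize for cosine similarity
        if not self.memory:
            self.memory.append(z)
            return 1.0  # the first state of an episode is maximally novel
        # cosine distance to the most similar previously seen embedding
        sims = np.array([m @ z for m in self.memory])
        reward = float(1.0 - sims.max())
        self.memory.append(z)
        return reward
```

A revisited state embeds near a stored vector and earns a reward close to zero, while a semantically new state (far from all stored embeddings) earns a reward close to one, steering the agent toward unvisited, meaningful states even when the environment's own reward is sparse.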