Foundation Models for Semantic Novelty in Reinforcement Learning (2211.04878v1)

Published 9 Nov 2022 in cs.LG and cs.AI

Abstract: Effectively exploring the environment is a key challenge in reinforcement learning (RL). We address this challenge by defining a novel intrinsic reward based on a foundation model, such as contrastive language image pretraining (CLIP), which can encode a wealth of domain-independent semantic visual-language knowledge about the world. Specifically, our intrinsic reward is defined based on pre-trained CLIP embeddings without any fine-tuning or learning on the target RL task. We demonstrate that CLIP-based intrinsic rewards can drive exploration towards semantically meaningful states and outperform state-of-the-art methods in challenging sparse-reward procedurally-generated environments.
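The paper does not spell out its exact reward formula in this abstract, but a minimal sketch of one plausible form of a CLIP-based novelty bonus is shown below: embed each observation with a frozen CLIP image encoder, keep an episodic memory of past embeddings, and reward states whose embedding is far (in cosine distance) from everything seen so far. The function `intrinsic_reward` and the use of nearest-neighbour cosine distance are illustrative assumptions, not the authors' definition; a real implementation would obtain `embedding` from a pre-trained CLIP model without fine-tuning, as the abstract describes.

```python
import numpy as np

def intrinsic_reward(embedding, memory, eps=1e-8):
    """Hypothetical novelty bonus: cosine distance from the current
    (frozen) CLIP embedding to its nearest neighbour in an episodic
    memory of previously visited states."""
    e = embedding / (np.linalg.norm(embedding) + eps)
    if not memory:
        return 1.0  # nothing seen yet: maximally novel
    M = np.stack(memory)
    M = M / (np.linalg.norm(M, axis=1, keepdims=True) + eps)
    sims = M @ e                    # cosine similarity to each stored state
    return float(1.0 - sims.max())  # distance to the nearest neighbour

# Toy usage with stand-in 3-d "embeddings":
memory = []
obs_emb = np.array([1.0, 0.0, 0.0])
r_first = intrinsic_reward(obs_emb, memory)   # unseen state: reward 1.0
memory.append(obs_emb)
r_again = intrinsic_reward(obs_emb, memory)   # revisited state: reward ~0.0
```

Because the CLIP encoder is frozen, the bonus measures *semantic* novelty: two visually different renderings of the same scene map to nearby embeddings and earn little reward, which is what lets this signal generalize across procedurally generated levels.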

Authors (5)
  1. Tarun Gupta (16 papers)
  2. Peter Karkus (29 papers)
  3. Tong Che (26 papers)
  4. Danfei Xu (59 papers)
  5. Marco Pavone (314 papers)
Citations (7)
