Information is Power: Intrinsic Control via Information Capture (2112.03899v1)

Published 7 Dec 2021 in cs.LG and cs.AI

Abstract: Humans and animals explore their environment and acquire useful skills even in the absence of clear goals, exhibiting intrinsic motivation. The study of intrinsic motivation in artificial agents is concerned with the following question: what is a good general-purpose objective for an agent? We study this question in dynamic partially-observed environments, and argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model. This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states. We instantiate this approach as a deep reinforcement learning agent equipped with a deep variational Bayes filter. We find that our agent learns to discover, represent, and exercise control of dynamic objects in a variety of partially-observed environments sensed with visual observations without extrinsic reward.
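To make the objective concrete, below is a minimal sketch of the intrinsic reward the abstract describes: the negative entropy of the agent's estimated latent state distribution. It assumes a diagonal-Gaussian belief produced by some latent state-space model (e.g., a variational Bayes filter); the function names and the 32-dimensional latent size are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_belief_entropy(mean: np.ndarray, std: np.ndarray) -> float:
    """Differential entropy of a diagonal-Gaussian latent belief:
    H = 0.5 * sum(log(2 * pi * e * std^2))."""
    return float(0.5 * np.sum(np.log(2.0 * np.pi * np.e * std ** 2)))

def intrinsic_reward(belief_mean: np.ndarray, belief_std: np.ndarray) -> float:
    """Negative belief entropy: the agent is rewarded for keeping its
    estimated state distribution sharp (gathering information) and for
    steering the world toward predictable states (gaining control)."""
    return -gaussian_belief_entropy(belief_mean, belief_std)

# Hypothetical usage: the latent state-space model would output a belief
# over the latent state at each time step.
belief_mean = np.zeros(32)        # illustrative 32-dim latent state
belief_std = np.full(32, 0.1)     # low uncertainty -> low entropy -> high reward
print(intrinsic_reward(belief_mean, belief_std))
```

In the paper's setting this reward is computed from the learned latent model rather than the true environment state, which is what lets the agent pursue it in partially observed, visually sensed environments without any extrinsic reward.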

Authors (7)
  1. Nicholas Rhinehart (24 papers)
  2. Jenny Wang (3 papers)
  3. Glen Berseth (48 papers)
  4. John D. Co-Reyes (16 papers)
  5. Danijar Hafner (32 papers)
  6. Chelsea Finn (264 papers)
  7. Sergey Levine (531 papers)
Citations (8)
