
Evolving Inborn Knowledge For Fast Adaptation in Dynamic POMDP Problems (2004.12846v2)

Published 27 Apr 2020 in cs.NE, cs.AI, and cs.LG

Abstract: Rapid online adaptation to changing tasks is an important problem in machine learning and, recently, a focus of meta-reinforcement learning. However, reinforcement learning (RL) algorithms struggle in POMDP environments because the state of the system, essential in an RL framework, is not always observable. Additionally, hand-designed meta-RL architectures may not include suitable computational structures for specific learning problems. In contrast, evolving online learning mechanisms makes it possible to incorporate learning strategies into an agent that can (i) evolve memory when required and (ii) optimize adaptation speed for specific online learning problems. In this paper, we exploit the highly adaptive nature of neuromodulated neural networks to evolve a controller that uses the latent space of an autoencoder in a POMDP. Analysis of the evolved networks reveals the ability of the proposed algorithm to acquire inborn knowledge in a variety of aspects, such as detecting cues that reveal implicit rewards and evolving location neurons that aid navigation. The integration of inborn knowledge and online plasticity enabled fast adaptation and better performance compared to some non-evolutionary meta-reinforcement learning algorithms. The algorithm also proved successful in the 3D gaming environment Malmo Minecraft.
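To make the core mechanism concrete, below is a minimal sketch (not the authors' code) of a neuromodulated Hebbian update applied to a controller that reads an autoencoder's latent code. All names (`encode`, `W`, `modulation`) and the specific update rule are illustrative assumptions; the paper's evolved networks determine the actual plasticity parameters and modulatory wiring through neuroevolution.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, ACTION_DIM = 8, 4
W = rng.normal(scale=0.1, size=(ACTION_DIM, LATENT_DIM))  # plastic controller weights
eta = 0.05  # learning rate; in the paper such parameters are evolved

def encode(obs):
    """Stand-in for a pretrained autoencoder's encoder producing the latent state."""
    return np.tanh(obs[:LATENT_DIM])

def step(obs, modulation):
    """One controller step with modulated Hebbian plasticity.

    `modulation` is a scalar gating signal (e.g., the output of a
    modulatory neuron). When it is near zero, the weights stay fixed,
    so the network can switch learning on only when a reward-revealing
    cue is detected.
    """
    global W
    z = encode(obs)                          # latent state from the autoencoder
    a = np.tanh(W @ z)                       # controller output
    # Neuromodulated Hebbian rule: dW = eta * modulation * (post x pre)
    W += eta * modulation * np.outer(a, z)
    return a

obs = rng.normal(size=16)
print(step(obs, modulation=1.0))  # plastic update applied this step
print(step(obs, modulation=0.0))  # weights frozen this step
```

The gating by `modulation` is what lets inborn (evolved) structure decide *when* online learning happens, which is the interplay between inborn knowledge and plasticity the abstract describes.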

Authors (6)
  1. Eseoghene Ben-Iwhiwhu (6 papers)
  2. Pawel Ladosz (5 papers)
  3. Jeffery Dick (6 papers)
  4. Wen-Hua Chen (16 papers)
  5. Praveen Pilly (6 papers)
  6. Andrea Soltoggio (20 papers)
Citations (7)