Improving and Understanding Variational Continual Learning (1905.02099v1)

Published 6 May 2019 in stat.ML and cs.LG

Abstract: In the continual learning setting, tasks are encountered sequentially. The goal is to learn whilst i) avoiding catastrophic forgetting, ii) efficiently using model capacity, and iii) employing forward and backward transfer learning. In this paper, we explore how the Variational Continual Learning (VCL) framework achieves these desiderata on two benchmarks in continual learning: split MNIST and permuted MNIST. We first report significantly improved results on what was already a competitive approach. The improvements are achieved by establishing a new best practice approach to mean-field variational Bayesian neural networks. We then look at the solutions in detail. This allows us to obtain an understanding of why VCL performs as it does, and we compare the solution to what an 'ideal' continual learning solution might be.
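For context (not restated in the abstract itself), the core VCL recursion from Nguyen et al. (2018) projects the Bayesian posterior onto an approximating family after each task; a minimal sketch of that update, under the standard mean-field Gaussian assumption, is:

q_t(\theta) = \arg\min_{q \in \mathcal{Q}} \, \mathrm{KL}\!\left( q(\theta) \,\Big\|\, \tfrac{1}{Z_t}\, q_{t-1}(\theta)\, p(\mathcal{D}_t \mid \theta) \right), \qquad q_0(\theta) = p(\theta),

which is equivalent to maximizing the per-task ELBO
\mathbb{E}_{q(\theta)}\!\left[ \log p(\mathcal{D}_t \mid \theta) \right] - \mathrm{KL}\!\left( q(\theta) \,\|\, q_{t-1}(\theta) \right).
The "best practice" improvements reported in this paper concern how this mean-field variational objective is optimized in Bayesian neural networks, rather than a change to the recursion itself.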

Authors (4)
  1. Siddharth Swaroop
  2. Cuong V. Nguyen
  3. Thang D. Bui
  4. Richard E. Turner
Citations (49)
