Sleep-Like Unsupervised Replay Improves Performance when Data are Limited or Unbalanced (2402.10956v1)

Published 12 Feb 2024 in cs.NE and cs.LG

Abstract: The performance of artificial neural networks (ANNs) degrades when training data are limited or imbalanced. In contrast, the human brain can learn quickly from just a few examples. Here, we investigated the role of sleep in improving the performance of ANNs trained with limited data on the MNIST and Fashion MNIST datasets. Sleep was implemented as an unsupervised phase with local Hebbian-type learning rules. We found a significant boost in accuracy after the sleep phase for models trained with limited data in the range of 0.5-10% of the total MNIST or Fashion MNIST datasets. When more than 10% of the total data was used, sleep alone had a slight negative impact on performance, but this was remedied by fine-tuning on the original data. This study sheds light on a potential synaptic weight dynamics strategy employed by the brain during sleep to enhance memory performance when training data are limited or imbalanced.
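The abstract describes a two-phase scheme: ordinary supervised training on a small dataset, followed by an unsupervised "sleep" phase that adjusts weights with a local Hebbian-type rule. The sketch below illustrates that overall structure on a tiny synthetic problem; the layer sizes, learning rates, binarization threshold, and the specific potentiation/depression rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic "limited data" problem: 40 samples, 20-dim inputs, 2 classes.
X = rng.normal(size=(40, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One-hidden-layer network; sizes chosen for illustration only.
W1 = rng.normal(scale=0.1, size=(20, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

def forward(x):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    return h, h @ W2

# Supervised phase: plain gradient descent on cross-entropy,
# standing in for backprop training on the limited dataset.
for _ in range(200):
    h, out = forward(X)
    probs = np.exp(out - out.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0
    W2 -= 0.05 * h.T @ grad / len(y)
    W1 -= 0.05 * X.T @ ((grad @ W2.T) * (h > 0)) / len(y)

# "Sleep" phase: replay inputs without labels, binarize activity, and apply
# a local Hebbian-type update -- potentiate synapses where pre- and
# post-synaptic units are both active, depress where only pre is active.
def sleep_phase(W, inputs, n_steps=100, lr=0.001, thresh=0.5):
    W = W.copy()
    for _ in range(n_steps):
        x = inputs[rng.integers(len(inputs))]
        pre = (x > thresh).astype(float)
        post = ((pre @ W) > thresh).astype(float)
        W += lr * (np.outer(pre, post) - 0.5 * np.outer(pre, 1.0 - post))
    return W

W1 = sleep_phase(W1, np.maximum(X, 0.0))

_, out = forward(X)
acc = (out.argmax(axis=1) == y).mean()
print(f"train accuracy after sleep: {acc:.2f}")
```

In the paper the sleep phase operates on the networks trained on MNIST/Fashion MNIST subsets and the benefit appears specifically in the 0.5-10% data regime; this toy example only shows where the unsupervised Hebbian pass slots into the training pipeline.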

Authors (4)
  1. Anthony Bazhenov (2 papers)
  2. Pahan Dewasurendra (2 papers)
  3. Giri Krishnan (2 papers)
  4. Jean Erik Delanois (2 papers)
Citations (1)
