
Incremental Concept Formation over Visual Images Without Catastrophic Forgetting (2402.16933v2)

Published 26 Feb 2024 in cs.LG, cs.AI, cs.CV, and cs.IR

Abstract: Deep neural networks have excelled in machine learning, particularly in vision tasks; however, they often suffer from catastrophic forgetting when learning new tasks sequentially. In this work, we introduce Cobweb4V, an alternative to traditional neural network approaches. Cobweb4V is a novel visual classification method that builds on Cobweb, a human-like learning system inspired by the way humans incrementally learn new concepts over time. We conduct a comprehensive evaluation showing that Cobweb4V learns visual concepts proficiently, requires less data than traditional methods to achieve effective learning outcomes, maintains stable performance over time, and achieves commendable asymptotic behavior without catastrophic forgetting. These characteristics align with learning strategies in human cognition, positioning Cobweb4V as a promising alternative to neural network approaches.
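The abstract positions Cobweb4V as a descendant of Cobweb (Fisher, 1987), which incrementally sorts each new instance into a tree of probabilistic concepts by maximizing category utility. The sketch below is a minimal, hypothetical simplification in Python, not the authors' Cobweb4V implementation: it works over nominal attributes rather than images, omits Cobweb's merge and split operators, and every name in it (CobwebNode, insert, guesses_with) is illustrative.

```python
# Minimal sketch of Cobweb-style incremental concept formation
# (Fisher, 1987), the system Cobweb4V builds on. Illustrative only:
# nominal attributes, leaf fan-out, no merge/split operators.
from collections import defaultdict
import copy

class CobwebNode:
    def __init__(self):
        self.count = 0                                          # instances summarized here
        self.av_counts = defaultdict(lambda: defaultdict(int))  # attr -> value -> count
        self.children = []

    def update(self, instance):
        """Fold an instance's attribute-value counts into this concept."""
        self.count += 1
        for attr, value in instance.items():
            self.av_counts[attr][value] += 1

    def expected_correct_guesses(self):
        """sum_a sum_v P(a=v | concept)^2 -- the core of category utility."""
        return sum((n / self.count) ** 2
                   for values in self.av_counts.values()
                   for n in values.values())

    def guesses_with(self, instance):
        """Expected correct guesses if `instance` were added to this concept."""
        new_count = self.count + 1
        total = 0.0
        for attr in set(self.av_counts) | set(instance):
            values = dict(self.av_counts.get(attr, {}))
            if attr in instance:
                values[instance[attr]] = values.get(instance[attr], 0) + 1
            total += sum((n / new_count) ** 2 for n in values.values())
        return total

    def insert(self, instance):
        """Incrementally sort an instance into the concept tree."""
        if not self.children and self.count > 0:
            # Leaf already summarizes data: fan out into a copy of the
            # old leaf plus a singleton concept for the new instance.
            old = CobwebNode()
            old.count, old.av_counts = self.count, copy.deepcopy(self.av_counts)
            new = CobwebNode()
            new.update(instance)
            self.children = [old, new]
            self.update(instance)
            return
        self.update(instance)
        if not self.children:
            return  # empty root: the first instance just updates the counts

        # Category utility of a partition, relative to this (updated) parent:
        # (1/K) * sum_c P(c) * (ECG(c) - ECG(parent)).
        parent_ecg = self.expected_correct_guesses()
        def partition_cu(scores, counts):
            return sum((n / self.count) * (ecg - parent_ecg)
                       for ecg, n in zip(scores, counts)) / len(counts)

        scores = [c.expected_correct_guesses() for c in self.children]
        counts = [c.count for c in self.children]

        # Option 1: absorb the instance into the best existing child.
        best_i, best_cu = 0, float("-inf")
        for i, child in enumerate(self.children):
            s, c = scores[:], counts[:]
            s[i], c[i] = child.guesses_with(instance), c[i] + 1
            cu = partition_cu(s, c)
            if cu > best_cu:
                best_i, best_cu = i, cu

        # Option 2: create a new singleton concept for the instance
        # (a singleton guesses every attribute correctly: ECG = #attrs).
        new_cu = partition_cu(scores + [float(len(instance))], counts + [1])

        if new_cu > best_cu:
            leaf = CobwebNode()
            leaf.update(instance)
            self.children.append(leaf)
        else:
            self.children[best_i].insert(instance)

# Toy usage with nominal attributes (Cobweb4V itself works on images):
root = CobwebNode()
for x in [{"color": "red", "shape": "square"},
          {"color": "red", "shape": "circle"},
          {"color": "blue", "shape": "circle"}]:
    root.insert(x)
print(len(root.children), "top-level concepts")
```

Because each instance only updates summary statistics along its path and adds structure locally, earlier concepts are never overwritten, which is the property the abstract contrasts with catastrophic forgetting in sequentially trained neural networks.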
