
Few-shot Class-incremental Learning: A Survey (2308.06764v2)

Published 13 Aug 2023 in cs.LG and cs.AI

Abstract: Few-shot Class-Incremental Learning (FSCIL) presents a unique challenge in Machine Learning (ML), as it requires the Incremental Learning (IL) of new classes from sparsely labeled training samples without forgetting previous knowledge. Although the field has seen recent progress, it remains an active area of exploration. This paper provides a comprehensive and systematic review of FSCIL. Our in-depth examination covers the problem definition, the primary challenges of unreliable empirical risk minimization and the stability-plasticity dilemma, general schemes, and the related problems of IL and Few-shot Learning (FSL). In addition, we offer an overview of benchmark datasets and evaluation metrics. We then survey Few-shot Class-incremental Classification (FSCIC) methods under data-based, structure-based, and optimization-based approaches, and Few-shot Class-incremental Object Detection (FSCIOD) methods under anchor-free and anchor-based approaches. Finally, we present several promising research directions within FSCIL that merit further investigation.
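The abstract's problem definition can be made concrete with a small sketch of the standard FSCIL evaluation protocol: one large base session followed by a sequence of N-way K-shot incremental sessions, with accuracy averaged over all sessions. The split sizes and function names below are illustrative assumptions (a common CIFAR-100-style configuration in FSCIL work), not values taken from this survey.

```python
# Minimal sketch of the FSCIL session protocol: a base session with many
# labeled classes, then several N-way (K-shot) incremental sessions.
# The 60-base / 5-way split mirrors a configuration commonly used on
# CIFAR-100 in FSCIL papers; treat it as an assumed example.

def make_sessions(num_classes: int, base_classes: int, way: int):
    """Split class indices into one base session plus N-way sessions."""
    classes = list(range(num_classes))
    sessions = [classes[:base_classes]]  # session 0: the base classes
    for start in range(base_classes, num_classes, way):
        sessions.append(classes[start:start + way])  # each later session adds `way` classes
    return sessions

def average_accuracy(per_session_acc):
    """Common FSCIL summary metric: mean accuracy across all sessions,
    where each session's accuracy is measured over every class seen so far."""
    return sum(per_session_acc) / len(per_session_acc)

# 60 base classes, then eight 5-way incremental sessions = 9 sessions total.
sessions = make_sessions(num_classes=100, base_classes=60, way=5)
```

After each incremental session the model is tested on the union of all classes observed so far, which is what exposes both catastrophic forgetting (base-class accuracy drops) and overfitting to the few novel samples.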
