Sharing to learn and learning to share; Fitting together Meta-Learning, Multi-Task Learning, and Transfer Learning: A meta review (2111.12146v8)
Abstract: Integrating knowledge across different domains is an essential feature of human learning. Learning paradigms such as transfer learning, meta-learning, and multi-task learning mirror this process by exploiting prior knowledge for new tasks, encouraging faster learning and better generalization. This article gives a detailed view of these learning paradigms along with a comparative analysis. The weakness of one learning algorithm often turns out to be a strength of another, so combining them is a recurring theme in the literature. Numerous research papers study each of these paradigms separately and provide comprehensive overviews of them; this article instead reviews studies that combine pairs of these learning algorithms. It describes how such combinations are used to solve problems in many fields of research, including computer vision, natural language processing, and hyperspectral imaging, restricted to the supervised setting. Based on the knowledge accumulated from the literature, we hypothesize a generic task-agnostic and model-agnostic learning network: an ensemble of meta-learning, transfer learning, and multi-task learning termed Multi-modal Multi-task Meta Transfer Learning. We also present open research questions, limitations, and future research directions for this proposed network. The aim of this article is to spark interest among scholars in effectively merging existing learning algorithms, with the intention of advancing research in this field. Instead of presenting experimental results, we invite readers to explore and contemplate techniques for merging algorithms while navigating their limitations.
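To make the relationship between the three paradigms concrete, the sketch below (assuming PyTorch) combines a shared backbone that could be initialized from pretrained weights (transfer learning), one output head per task (multi-task learning), and a Reptile-style first-order outer loop that meta-updates the backbone initialization (meta-learning). It is an illustration only, not the Multi-modal Multi-task Meta Transfer Learning network hypothesized in the article; the module names, task heads, and synthetic data are hypothetical.

```python
# A minimal sketch, assuming PyTorch, of how the three paradigms can interlock.
# Transfer learning: the backbone weights could be loaded from a pretrained model.
# Multi-task learning: one lightweight head per task on top of the shared backbone.
# Meta-learning: a Reptile-style first-order outer loop meta-updates the backbone
# initialization. Only the backbone is meta-updated in this sketch; heads keep
# adapting across meta-steps. All names and the synthetic data are hypothetical.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
heads = nn.ModuleDict({"task_a": nn.Linear(32, 2), "task_b": nn.Linear(32, 5)})
loss_fn = nn.CrossEntropyLoss()

def synthetic_support_set(num_classes: int):
    """Stand-in for the support set of a sampled task."""
    return torch.randn(8, 16), torch.randint(0, num_classes, (8,))

meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5
for meta_step in range(100):                                   # outer (meta) loop
    init_state = copy.deepcopy(backbone.state_dict())
    deltas = {k: torch.zeros_like(v) for k, v in init_state.items()}
    for name, head in heads.items():                           # inner loop over tasks
        backbone.load_state_dict(init_state)                   # each task adapts from the shared init
        opt = torch.optim.SGD(
            list(backbone.parameters()) + list(head.parameters()), lr=inner_lr
        )
        for _ in range(inner_steps):
            x, y = synthetic_support_set(head.out_features)
            loss = loss_fn(head(backbone(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                                  # accumulate this task's adaptation direction
            for k, v in backbone.state_dict().items():
                deltas[k] += (v - init_state[k]) / len(heads)
    # Reptile update: move the shared initialization toward the average adapted weights.
    with torch.no_grad():
        backbone.load_state_dict({k: init_state[k] + meta_lr * deltas[k] for k in init_state})
```

Swapping the first-order update for a second-order one such as MAML, or loading the backbone from a pretrained encoder, would change only the meta-learning or transfer component of this sketch while leaving the multi-task structure intact.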