
Generalizing from a Few Examples: A Survey on Few-Shot Learning (1904.05046v3)

Published 10 Apr 2019 in cs.LG and cs.AI

Abstract: Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small. Recently, Few-Shot Learning (FSL) is proposed to tackle this problem. Using prior knowledge, FSL can rapidly generalize to new tasks containing only a few samples with supervised information. In this paper, we conduct a thorough survey to fully understand FSL. Starting from a formal definition of FSL, we distinguish FSL from several relevant machine learning problems. We then point out that the core issue in FSL is that the empirical risk minimized is unreliable. Based on how prior knowledge can be used to handle this core issue, we categorize FSL methods from three perspectives: (i) data, which uses prior knowledge to augment the supervised experience; (ii) model, which uses prior knowledge to reduce the size of the hypothesis space; and (iii) algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space. With this taxonomy, we review and discuss the pros and cons of each category. Promising directions, in the aspects of the FSL problem setups, techniques, applications and theories, are also proposed to provide insights for future research.

An Academic Overview of Few-Shot Learning: Generalizing from a Few Examples

The survey "Generalizing from a Few Examples: A Survey on Few-Shot Learning" by Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni provides a comprehensive examination of Few-Shot Learning (FSL), a machine learning paradigm essential for applications with limited data. The authors delineate FSL from related learning paradigms, identify its core challenge, and propose a taxonomy of existing techniques based on how prior knowledge is used to address that challenge.

Core Issues in Few-Shot Learning

The paper starts by defining the central problem in FSL: the unreliability of the empirical risk minimizer when labeled examples are scarce. In traditional machine learning, a large labeled dataset allows the empirical risk minimizer to converge toward the optimal hypothesis. In FSL scenarios, however, the number of training samples is extremely limited, so the empirical risk is a poor estimate of the expected risk and overfitting becomes a significant concern, making traditional minimizers inadequate. The primary challenge in FSL is therefore to use prior knowledge efficiently to compensate for the lack of data.
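This gap between empirical and expected risk can be illustrated with a toy regression, a hypothetical sketch not taken from the paper: a flexible model fit to only four noisy samples drives the training error to essentially zero while remaining far from the true function.

```python
import numpy as np

rng = np.random.default_rng(0)
true_fn = np.sin  # the unknown target function

# An FSL-style training set: only 4 noisy labeled examples.
x_train = rng.uniform(0, 2 * np.pi, 4)
y_train = true_fn(x_train) + rng.normal(0, 0.1, 4)

# A cubic through 4 points interpolates them exactly:
# the empirical risk (training MSE) is driven to ~0 ...
fit = np.poly1d(np.polyfit(x_train, y_train, deg=3))
train_mse = np.mean((fit(x_train) - y_train) ** 2)

# ... but the expected risk, estimated on dense held-out points,
# stays much larger: the empirical risk minimizer is unreliable.
x_test = np.linspace(0, 2 * np.pi, 200)
test_mse = np.mean((fit(x_test) - true_fn(x_test)) ** 2)
```

The three categories in the taxonomy below correspond to three ways of injecting prior knowledge to close this gap: more data, a smaller hypothesis space, or a better search strategy.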

Taxonomy of Few-Shot Learning Methods

The authors categorize FSL methods based on three key perspectives: data, model, and algorithm.

  1. Data Augmentation Methods:
    • Transforming Samples from the Training Set: These methods apply learned transformations to the existing few-shot data to generate synthetic samples. Examples include learning geometric transformations or variational auto-encoders to create additional data points.
    • Transforming Samples from Weakly Labeled or Unlabeled Datasets: By leveraging unlabeled or weakly labeled data, methods like semi-supervised learning or pseudo-labeling are used to supplement the few-shot learning dataset.
    • Transforming Samples from Similar Datasets: Techniques here aggregate and adapt samples from related but larger datasets, employing models like generative adversarial networks (GANs) to produce additional data points congruent with the target tasks.
  2. Model-Based Methods:
    • Multitask Learning: This involves sharing or tying model parameters across related tasks, thereby facilitating generalization from shared structures in the data.
    • Embedding Learning: These methods learn a low-dimensional representation where few-shot examples can be effectively discriminated, using techniques such as Prototypical Networks and Siamese Networks.
    • Learning with External Memory: Incorporating an external memory structure, these methods store and refine information from few-shot examples, enhancing model robustness.
    • Generative Modeling: Methods in this category employ probabilistic models to learn distributions from which few-shot samples can be generated, constraining the hypothesis space.
  3. Algorithmic Methods:
    • Refining Existing Parameters: Parameters initialized from pre-trained models are fine-tuned with regularization to adapt to few-shot scenarios.
    • Refining Meta-Learned Parameters: Techniques like Model-Agnostic Meta-Learning (MAML) learn an initialization that can be quickly adapted to new tasks with minimal data, thus optimizing for rapid generalization.
    • Learning the Optimizer: In these methods, a meta-learner outputs optimization steps directly, enabling efficient parameter updates for few-shot tasks.
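As a concrete instance of the embedding-learning category above, the inference step of Prototypical Networks can be sketched in a few lines. This is a minimal illustration with an identity embedding standing in for the trained network:

```python
import numpy as np

def prototypical_predict(support_x, support_y, query_x, embed=lambda x: x):
    """Nearest-prototype classification over embedded support examples.

    Sketch of the inference step of Prototypical Networks; in practice
    `embed` is a trained neural network (identity here for illustration).
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its support samples.
    protos = np.stack([embed(support_x[support_y == c]).mean(axis=0)
                       for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((embed(query_x)[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

# A 2-way, 2-shot toy episode.
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
support_y = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.2], [4.8, 5.2]])
preds = prototypical_predict(support_x, support_y, queries)  # -> [0, 1]
```

Because the classifier is just "nearest class mean in embedding space", it constrains the hypothesis space sharply, which is exactly how model-based methods cope with having only a few labeled samples per class.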

Implications and Future Directions

The survey emphasizes the importance of integrating multi-modal information, since some tasks can draw on rich sources of prior knowledge available in other modalities. The authors also call for advances in meta-learning techniques, particularly in avoiding negative transfer and in handling dynamic task distributions in streaming settings.

Moreover, the survey highlights the diverse range of applications for FSL, from computer vision and natural language processing to robotics and acoustic signal processing. Each application sphere can benefit from tailored FSL strategies, leveraging advancements in embedding learning, memory networks, and generative modeling.

The merits of automated machine learning (AutoML) in evolving FSL techniques are underscored, suggesting a fertile ground for developing automated feature engineering and model selection processes tailored for few-shot scenarios. The theoretical analysis in FSL, including sample complexity and convergence guarantees, remains an open area with significant research potential.

Conclusion

The paper "Generalizing from a Few Examples: A Survey on Few-Shot Learning" offers an in-depth exploration of FSL, providing a structured taxonomy and critical insights into the current state and future trajectory of the field. By systematically leveraging prior knowledge and enhancing data, model, and algorithmic strategies, FSL stands poised to bridge crucial gaps in machine learning, aligning closer with human-like learning capabilities.

Authors (4)
  1. Yaqing Wang (59 papers)
  2. Quanming Yao (102 papers)
  3. James Kwok (23 papers)
  4. Lionel M. Ni (20 papers)
Citations (1,708)