
Foundational Models Defining a New Era in Vision: A Survey and Outlook (2307.13721v1)

Published 25 Jul 2023 in cs.CV and cs.AI

Abstract: Vision systems to see and reason about the compositional nature of visual scenes are fundamental to understanding our world. The complex relations between objects and their locations, ambiguities, and variations in the real-world environment can be better described in human language, naturally governed by grammatical rules and other modalities such as audio and depth. The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time. These models are referred to as foundational models. The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene or manipulating the robot's behavior through language instructions. In this survey, we provide a comprehensive review of such emerging foundational models, including typical architecture designs to combine different modalities (vision, text, audio, etc), training objectives (contrastive, generative), pre-training datasets, fine-tuning mechanisms, and the common prompting patterns; textual, visual, and heterogeneous. We discuss the open challenges and research directions for foundational models in computer vision, including difficulties in their evaluations and benchmarking, gaps in their real-world understanding, limitations of their contextual understanding, biases, vulnerability to adversarial attacks, and interpretability issues. We review recent developments in this field, covering a wide range of applications of foundation models systematically and comprehensively. A comprehensive list of foundational models studied in this work is available at \url{https://github.com/awaisrauf/Awesome-CV-Foundational-Models}.

Authors (8)
  1. Muhammad Awais (59 papers)
  2. Muzammal Naseer (67 papers)
  3. Salman Khan (244 papers)
  4. Rao Muhammad Anwer (67 papers)
  5. Hisham Cholakkal (78 papers)
  6. Mubarak Shah (208 papers)
  7. Ming-Hsuan Yang (377 papers)
  8. Fahad Shahbaz Khan (225 papers)
Citations (94)

Summary

Foundational Models Defining a New Era in Vision

The paper "Foundational Models Defining a New Era in Vision: A Survey and Outlook" offers a comprehensive overview of the current landscape and future possibilities of foundational models in computer vision. It explores the integration of vision systems with language and other modalities, defining these systems as foundational models due to their capacity for zero-shot, few-shot, and multi-modal prompting.

Overview of Foundational Models

Foundational models distinguish themselves by learning from large-scale data across diverse modalities, including vision, text, audio, and more. These models harness deep neural networks and self-supervised learning to generalize across tasks. This paper synthesizes a wide range of foundational models, detailing their typical architectures, training methods, prompt engineering, and real-world applications.

Architectural and Training Paradigms

Four primary architectural styles emerge from the survey: dual-encoder, fusion, encoder-decoder, and adapted LLM designs. Dual-encoder models such as CLIP and ALIGN process each modality in a separate tower and align the resulting embeddings in a shared space, while the other families fuse the modalities at various stages, as sketched below.
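
To make the dual-encoder idea concrete, the following is a minimal sketch, assuming PyTorch and placeholder vision/text backbones. The module names, dimensions, and temperature initialization are illustrative assumptions, not the survey's or CLIP's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """CLIP/ALIGN-style dual encoder: each modality has its own tower,
    and both project into a shared embedding space (illustrative sketch)."""

    def __init__(self, image_backbone: nn.Module, text_backbone: nn.Module,
                 image_dim: int, text_dim: int, embed_dim: int = 512):
        super().__init__()
        self.image_backbone = image_backbone   # e.g. a ViT or ResNet trunk
        self.text_backbone = text_backbone     # e.g. a Transformer text encoder
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable temperature

    def forward(self, images, token_ids):
        img_feat = self.image_proj(self.image_backbone(images))
        txt_feat = self.text_proj(self.text_backbone(token_ids))
        # L2-normalize so dot products become cosine similarities
        img_feat = F.normalize(img_feat, dim=-1)
        txt_feat = F.normalize(txt_feat, dim=-1)
        return img_feat, txt_feat, self.logit_scale.exp()
```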

Training objectives are another critical focus. The paper outlines diverse loss functions, including contrastive objectives and generative objectives such as masked language modeling. These objectives guide the model to align representations, understand inputs, and make predictions across modalities.
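
As an illustration of the contrastive objective, here is a minimal sketch of the symmetric image-text InfoNCE loss used in CLIP-style training, assuming normalized feature batches like those produced by the dual-encoder sketch above; it is a reconstruction for illustration, not the paper's code:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_feat: torch.Tensor,
                          txt_feat: torch.Tensor,
                          logit_scale: torch.Tensor) -> torch.Tensor:
    """Symmetric InfoNCE: matching image/text pairs lie on the diagonal
    of the similarity matrix and are treated as the positive class."""
    logits_per_image = logit_scale * img_feat @ txt_feat.t()   # [B, B]
    logits_per_text = logits_per_image.t()
    targets = torch.arange(img_feat.size(0), device=img_feat.device)
    loss_i = F.cross_entropy(logits_per_image, targets)
    loss_t = F.cross_entropy(logits_per_text, targets)
    return 0.5 * (loss_i + loss_t)
```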

Data and Prompting Strategies

The training of foundational models requires large-scale, diverse datasets. The paper categorizes pre-training data into image-text pairs, pseudo-labeled datasets, and combinations of existing benchmark datasets. Prompt engineering, at both training and evaluation time, lets models be directed toward specific tasks with minimal input adjustments.
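
To illustrate textual prompting at evaluation time, below is a minimal zero-shot classification sketch using OpenAI's open-source clip package, assuming the package is installed and a local image file exists; the class names, prompt template, and file path are illustrative placeholders:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Prompt engineering: wrap class names in a natural-language template.
class_names = ["dog", "cat", "airplane"]
prompts = [f"a photo of a {name}" for name in class_names]
text_tokens = clip.tokenize(prompts).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text_tokens)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```

Changing only the prompt strings redirects the same frozen model to a different classification task, which is the sense in which "minimal input adjustments" steer foundational models.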

Applications and Adaptations

Foundational models have demonstrated impressive adaptability across diverse vision tasks. From the zero-shot capabilities of models like CLIP and SAM to the domain adaptation efforts seen in medical imaging and remote sensing, these models bring unprecedented flexibility and generalization.

Notably, segmentation models like SAM leverage large-scale datasets and innovative prompt mechanisms for real-time interaction and enhanced user control. Efforts are also underway to adapt these complex models for mobile applications, highlighting practical challenges in efficient deployment.
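
For visual prompting, the sketch below shows how a bounding-box prompt can drive SAM through the public segment-anything package. It is a minimal sketch assuming the package, OpenCV, and a downloaded ViT-H checkpoint; the checkpoint path, image path, and box coordinates are placeholders:

```python
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM backbone (checkpoint path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

# Read an image and hand it to the predictor (convert BGR -> RGB).
image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single bounding-box prompt in (x0, y0, x1, y1) pixel coordinates.
box = np.array([100, 150, 400, 420])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)

print(masks.shape, scores)  # one binary mask for the boxed object
```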

Challenges and Future Directions

The paper highlights several challenges facing foundational models, including:

  1. Evaluation and Benchmarking: Establishing robust benchmarks that encapsulate the models' diverse capabilities.
  2. Bias and Fairness: Ensuring ethical deployment by addressing inherent biases in training datasets.
  3. Interpretability and Real-world Understanding: Enhancing model transparency and real-world problem-solving capabilities.

Future research is poised to focus on honing these models' multimodal and interactive capabilities, reducing data and computational resource needs, and improving adversarial robustness and bias mitigation.

Conclusion

The expansive review presented in this paper underlines the transformative impact of foundational models on vision tasks. By integrating vast amounts of data across multiple modalities, these models are paving the way for more versatile, efficient, and intelligent systems capable of addressing complex real-world challenges. As research continues, foundational models will likely evolve to further bridge the gap between computational perception and human-like understanding.
