CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks (2206.09059v2)

Published 18 Jun 2022 in cs.CL, cs.AI, cs.CV, and cs.LG

Abstract: Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continually learning (CL) tasks as they arrive. Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks. We present CLiMB, a benchmark to study the challenge of learning multimodal tasks in a CL setting, and to systematically evaluate how upstream continual learning can rapidly generalize to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.
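
The abstract reports that common CL methods help mitigate forgetting as tasks arrive sequentially. As an illustrative aid only (none of this code comes from the CLiMB repository), below is a minimal PyTorch sketch of one such method, experience replay with reservoir sampling over the task stream. The function name `train_continually`, its signature, and the assumption that all tasks share an input shape and label space are hypothetical simplifications for the sketch.

```python
import random

import torch
import torch.nn as nn
import torch.optim as optim


def train_continually(model, task_loaders, replay_capacity=1000, epochs=1, lr=1e-4):
    """Train `model` on tasks arriving sequentially, replaying a bounded
    reservoir of earlier examples to mitigate catastrophic forgetting.

    Assumes every task is a classification problem whose batches share
    the same input shape and label space (a simplifying assumption).
    """
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    replay_buffer = []  # (input, label) pairs sampled from the stream
    seen = 0            # total examples observed across all tasks

    for loader in task_loaders:
        for _ in range(epochs):
            for inputs, labels in loader:
                # Mix replayed examples from earlier tasks into the batch.
                if replay_buffer:
                    k = min(len(replay_buffer), inputs.size(0))
                    past = random.sample(replay_buffer, k)
                    inputs = torch.cat([inputs, torch.stack([x for x, _ in past])])
                    labels = torch.cat([labels, torch.stack([y for _, y in past])])

                optimizer.zero_grad()
                loss = criterion(model(inputs), labels)
                loss.backward()
                optimizer.step()

        # Reservoir sampling: keep a uniform random sample of all
        # examples seen so far, within a fixed memory budget.
        for inputs, labels in loader:
            for x, y in zip(inputs, labels):
                seen += 1
                if len(replay_buffer) < replay_capacity:
                    replay_buffer.append((x, y))
                elif random.random() < replay_capacity / seen:
                    replay_buffer[random.randrange(replay_capacity)] = (x, y)
```

In the paper's setting the model is a modified ViLT encoder handling both multimodal and unimodal inputs; the sketch collapses that to a generic classifier purely to keep the replay mechanics visible.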

Authors (6)
  1. Tejas Srinivasan (20 papers)
  2. Ting-Yun Chang (10 papers)
  3. Leticia Leonor Pinto Alva (1 paper)
  4. Georgios Chochlakis (12 papers)
  5. Mohammad Rostami (64 papers)
  6. Jesse Thomason (65 papers)
Citations (57)
