ConStruct-VL: Data-Free Continual Structured VL Concepts Learning (2211.09790v2)

Published 17 Nov 2022 in cs.LG, cs.AI, and cs.CV

Abstract: Recently, large-scale pre-trained Vision-and-Language (VL) foundation models have demonstrated remarkable capabilities in many zero-shot downstream tasks, achieving competitive results for recognizing objects defined by as little as a short text prompt. However, it has also been shown that VL models are still brittle in Structured VL Concept (SVLC) reasoning, such as the ability to recognize object attributes, states, and inter-object relations. This leads to reasoning mistakes, which need to be corrected as they occur by teaching VL models the missing SVLC skills; often this must be done using private data where the issue was found, which naturally leads to a data-free continual (no task-id) VL learning setting. In this work, we introduce the first Continual Data-Free Structured VL Concepts Learning (ConStruct-VL) benchmark and show it is challenging for many existing data-free CL strategies. We, therefore, propose a data-free method comprising a new Adversarial Pseudo-Replay (APR) approach, which generates adversarial reminders of past tasks from past task models. To use this method efficiently, we also propose a continual parameter-efficient Layered-LoRA (LaLo) neural architecture allowing no-memory-cost access to all past models at train time. We show this approach outperforms all data-free methods by as much as ~7% while even matching some levels of experience-replay (prohibitive for applications where data privacy must be preserved). Our code is publicly available at https://github.com/jamessealesmith/ConStruct-VL
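The abstract's Layered-LoRA (LaLo) idea can be illustrated with a minimal sketch: a frozen base weight plus a stack of low-rank adapters, one per task, where evaluating with only the first k adapters recovers the model as it stood after task k. The class and parameter names below are hypothetical illustrations of the general low-rank-adapter stacking pattern, not the authors' implementation.

```python
import numpy as np

class LayeredLoRALinear:
    """Toy sketch of a Layered-LoRA-style linear layer.

    The base weight W is frozen; each continual-learning task t appends a
    low-rank update A_t @ B_t. Applying only the first k adapters yields
    the model as it was after task k, so all past-task models stay
    accessible at train time at the memory cost of the small adapters.
    """

    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))  # frozen base weight
        self.rank = rank
        self.adapters = []  # list of (A, B) pairs, one per task

    def add_task_adapter(self, rng):
        # In training these would be learned; here they are random stand-ins.
        A = rng.standard_normal((self.W.shape[0], self.rank)) * 0.1
        B = rng.standard_normal((self.rank, self.W.shape[1])) * 0.1
        self.adapters.append((A, B))

    def forward(self, x, upto=None):
        """Apply the base weight plus the first `upto` adapters (all if None)."""
        k = len(self.adapters) if upto is None else upto
        W_eff = self.W.copy()
        for A, B in self.adapters[:k]:
            W_eff = W_eff + A @ B
        return W_eff @ x
```

Under this sketch, `forward(x, upto=t)` plays the role of the "past task model" that an adversarial pseudo-replay step would query to generate reminders of earlier tasks.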

Authors (10)
  1. James Seale Smith (15 papers)
  2. Paola Cascante-Bonilla (17 papers)
  3. Assaf Arbelle (26 papers)
  4. Donghyun Kim (129 papers)
  5. Rameswar Panda (79 papers)
  6. David Cox (48 papers)
  7. Diyi Yang (151 papers)
  8. Zsolt Kira (110 papers)
  9. Rogerio Feris (105 papers)
  10. Leonid Karlinsky (79 papers)
Citations (17)